As we were touring way too many preschools, I got to take a peek into many of their teacher and parent libraries (or their directors' offices), and one book I kept seeing was Mind in the Making: The Seven Essential Life Skills Every Child Needs by Ellen Galinsky. I got a chance to read it recently, and I found it offers a good, concise summary of much of the research to date on childhood learning, along with many of the lessons I had read in other books. I personally learned more from Einstein Never Used Flashcards by Kathy Hirsh-Pasek (whom Ellen cites many times in her book), but Ellen's book was still informative and interesting.
The parts I found most useful were the concrete examples of games and activities that can help develop some of the "essential life skills" she describes. It's always a balancing act between letting kids follow their own self-directed learning adventures and suggesting activities or games as the parent. This book provides lots of ideas to consider when you need to be more of a guide, or when helping to foster a child's personal interest, which Galinsky calls the child's personal "lemonade stand."
My full notes on the book are below.
I'm a huge fan of Nassim Nicholas Taleb's work and thinking.
My favorite (most mind-altering) book from a few years ago was Antifragile. I also enjoyed his book of aphorisms, The Bed of Procrustes.
Skin in the Game: Hidden Asymmetries in Daily Life by Nassim Nicholas Taleb picks up where Antifragile leaves off. I found that this latest book builds on and combines many of the lessons from Fooled by Randomness, The Black Swan, and Antifragile. It was really good.
I also felt in many ways personally humbled and called out because I have, in the past and certainly in some ways in the present, fallen into the traps of scientism, IYI-ness, brain porn, etc. -- all the ways that our thinking can go wrong when not driven by skin in the game and survival-focused rationality.
The concept of ergodicity was the toughest to grasp, and I felt it could have been explained more clearly, in more depth, and with more examples, but after some careful re-reading I think I got the essence of it. It offers a very useful lesson for thinking about real-world decisions and about which class of risk they fall into: ergodic (not subject to absorbing barriers of ruin) or non-ergodic (subject to scenarios of total ruin, where traditional cost-benefit analysis and simple [academic] "probabilistic reasoning" don't make sense). It's so easy to forget this and keep applying the same type of pseudo-rational thinking that was drilled into us in college "decision theory" classes.
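One way I've found to make the ergodicity point concrete is a small simulation (my own sketch, not an example from the book, and the payoff numbers are my own assumption): a bet that multiplies your wealth by 1.5 on heads and 0.6 on tails has a positive expected value averaged across many one-time players (the ensemble average), yet a single player who keeps playing is almost surely ruined over time (the time average), because repeated play compounds multiplicatively and crosses the absorbing barrier.

```python
import random

random.seed(0)  # fixed seed so the run is reproducible

def play(n_flips):
    """One player's wealth after repeated multiplicative bets:
    x1.5 on heads, x0.6 on tails (hypothetical payoffs)."""
    wealth = 1.0
    for _ in range(n_flips):
        wealth *= 1.5 if random.random() < 0.5 else 0.6
    return wealth

# Ensemble view: many players each play 100 flips once.
# The arithmetic expected value per flip is 0.5*1.5 + 0.5*0.6 = 1.05 > 1,
# so the average across players looks attractive.
ensemble = sum(play(100) for _ in range(20_000)) / 20_000

# Time view: one player plays 1000 flips in a row.
# The geometric mean per flip is (1.5 * 0.6) ** 0.5 ≈ 0.949 < 1,
# so a single path shrinks toward zero almost surely.
one_path = play(1000)

print(f"ensemble average after 100 flips: {ensemble:.2f}")
print(f"single path after 1000 flips: {one_path:.2e}")
```

The same bet is "good" or "ruinous" depending on whether you average over people or over time, which is exactly why cost-benefit reasoning that ignores ruin can mislead.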
I like the counterbalance in his writing between intense technical rigor (see the technical appendix filled with formulas and proofs) and street-smart "tawk" (calling on the wisdom of grandmothers and the ancients to point out what obviously makes sense in some situations, rather than what "scientism" can delude us into believing [GMOs, etc.]).
I wasn't a big fan of all the political name-bashing and the calling out of Monsanto shills, Hillary Monsanto Malmaison, and the like, but I do get that this is part of his system of virtue: he cares a lot about calling out frauds by their true names and not caring what others think.
There were a lot of valuable and practical lessons in the book, and some of my main takeaways are here:
I keep wondering how I can keep these fresh in my mind going forward and keep applying these to make tangible changes in my life (especially via negativa-wise) and become less of an IYI over time.
Other (somewhat unresolved) questions this book has prompted me to think about:
My full notes on the book span 35 pages, but a collection of the points that were most relevant for me is below.
A good friend of mine who's an engineer and entrepreneur and really into AI recommended that I check out Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. I just finished reading it and thought it was pretty eye-opening and scary.
I enjoyed the historical overview of the field of AI, and the many examples of current programs and how they rank against humans in various domains.
My biggest takeaways:
- McCarthy's dictum: when something works, it's no longer called AI.
- Quote from Knuth: computers can now do most of the things that require thinking, but very few of the things that people and animals can do without thinking.
- If/when general AI is solved, the transition to superintelligence (above-human-level intelligence) will happen too fast to respond to at the time, so we should think and plan ahead.
- It's really hard to design objective functions/values for AI. Most strategies that seem fine at first order turn out really bad once you consider second- and third-order effects.
- The most likely scenario is that we will get something wrong and basically be screwed. This is quite scary.
- Approaches like whole brain emulation seem interesting but really difficult to pull off in practice.
- The indirect value loading approach ("the AI should try to maximize whatever most of us would want it to maximize had we thought about it long and hard") seems interesting and compelling (and was new to me).
I'm personally skeptical we will ever achieve general AI. I think we'll just get better and better at domain-specific applications, but I don't think we'll ever figure out how to artificially make a machine think in the way we do (or for that matter understand how we truly think). I think it's just one of those mysteries that will never be fully solved.
I lost steam about two-thirds of the way through the book, when the level of detailed analysis of very futuristic scenarios started to seem overboard to me. It's hard to reason effectively about situations that will in all likelihood be very different from anything we can imagine right now. It's good to be cautious and try to plan ahead, but I thought the book got too far into the weeds and too fine-grained given the immense uncertainty in question.
My full notes on the book are below.