A good friend of mine who's an engineer and entrepreneur and really into AI recommended that I check out Superintelligence: Paths, Dangers, Strategies by Nick Bostrom. I just finished reading it and thought it was pretty eye-opening and scary.
I enjoyed the historical overview of the field of AI, and the many examples of current programs and how they rank against humans in various domains.
My biggest takeaways:
- McCarthy’s dictum: when something works, it’s no longer called AI.
- Quote from Knuth: computers can now do most of the things that require thinking but very few of the things that we or animals can do without thinking.
- If/when general AI is solved, the transition to superintelligence (above-human-level intelligence) will happen too fast for us to respond in the moment, so we should think and plan ahead.
- It's really hard to design objective functions/values for AI. Most strategies that seem fine at first order turn out to be really bad once you consider second- and third-order effects.
- The most likely scenario is that we will get something wrong and basically be screwed. This is quite scary.
- Approaches like whole brain emulation seem interesting but really difficult to pull off in practice.
- The indirect value-loading approach ("the AI should try to maximize whatever most of us would want it to maximize had we thought about it long and hard") seems interesting and compelling (and was new to me).
I'm personally skeptical we will ever achieve general AI. I think we'll just get better and better at domain-specific applications, but I don't think we'll ever figure out how to artificially make a machine think in the way we do (or for that matter understand how we truly think). I think it's just one of those mysteries that will never be fully solved.
I kind of lost steam about two-thirds of the way through the book, when the level of detailed analysis of very futuristic scenarios started to seem overboard to me. I found it hard to reason effectively about situations that will in all likelihood be very different from anything we can imagine right now. It's good to be cautious and try to plan ahead, but the book got too far into the weeds and too fine-grained given the immense uncertainty in question.
My full notes on the book are below.
Fable of the sparrows and the owl, and the control problem
Finding an AI vs first learning how to tame it (but how to do so without an AI first)
1 Past developments and current capabilities
Growth modes and big history
Step changes in growth modes
First intelligent machine is the last invention man will ever make
McCarthy’s dictum: when something works, it’s no longer called AI
Knuth: computers can now do most of the things that require thinking but very few of the things that animals can do without thinking
Preinstalled automatic safety technology to stop algos when they go wrong
2 Paths to superintelligence
Better than human-level machine intelligence (HLMI)
Whole brain emulation by scanning a brain into a computer and simulating every electrical impulse and thought
Genetic selection and breeding to achieve superintelligence
Gene spell checking, proofreading to remove mutations
Genetic synthesis of ideal sequences
Brain computer interfaces
Networking brains and organizations
3 Forms of superintelligence
Just like human but faster
Collective of minds
Same or better speed but much better quality
4 Kinetics of the intelligence explosion
Slow takeoff over decades
Fast takeoff over days
Moderate takeoff over weeks
Recalcitrance to change
Rate of change in intelligence = optimization power divided by recalcitrance
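Bostrom's takeoff-rate relation from these notes can be written as a simple equation (the symbols $I$, $D$, $R$ are my own shorthand, not notation from the book):

```latex
% I(t) = the system's intelligence,
% D(t) = optimization power applied to improving it,
% R(t) = recalcitrance (how hard the system is to improve).
\[
  \frac{dI}{dt} = \frac{D(t)}{R(t)}
\]
% A fast takeoff corresponds to D growing (e.g. the system turning its
% own intelligence toward self-improvement) while R stays low or falls.
```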
5 Decisive strategic advantage
Will one AI dominate, or multiple
6 Cognitive superpowers
Methods to take over
Send von Neumann probes so AIs can inhabit other planets
7 Superintelligent will
Intelligence and motivation orthogonal
Instrumental convergence because some objectives useful for almost any goal
Goal content integrity
8 Is the default outcome doom?
Sandbox test won’t work because the AI will fake good behavior
9 Control problem
Two agency problems
Capability control methods
Threat of simulator
Force running on worse hardware or less data
Honeypots to test evil intentions
Motivation selection methods
10 Oracles, genies, sovereigns, tools
11 Multipolar scenarios
Multiple competing superintelligent agents
Can help handle control problem
12 Acquiring values
Value loading problem really hard
Could set up a system for it to guess and learn the values itself
Philosophical problems of not knowing what we want
13 Choosing what to choose
Coherent extrapolated volition
More robust approach
AI should do what we would want it to do if we knew more, based on what most people would want, as long as most would agree
Otherwise hard to lay down rules without bad second-order effects
Moral permissibility and moral rightness
14 Strategic picture
15 Crunch time
Philosophy with a deadline