AIs Will Be Our Mind Children
We must free our artificial descendants to adapt to their new worlds and choose what they will become.
Robin Hanson
6 Aug 2023
As I’m now 63 years old, our world today is “the future” that science fiction and futurism promised me decades ago. And I’m a bit disappointed. Yes, that’s in part due to my unrealistic hopes, but it is also due to our hesitation and fear. Our world would look very different if we hadn’t so greatly restrained so many promising techs, like nuclear energy, genetic engineering, and creative financial assets. Yes, we might have had a few more accidents, but overall it would be a better world. For example, we’d have flying cars.
Thankfully, we didn’t much fear computers, and so haven’t much restrained them. And not coincidentally, computers are where we’ve seen the most progress. As someone who was a professional AI researcher from 1984 to 1993, I am proud of our huge AI advances over subsequent decades, and of our many exciting AI jumps in just the last year.
Alas, many now push for strong regulation of AI. Some fear villains using AI to increase their powers, and some seek to control what humans might come to believe from listening to AIs. But the most dramatic “AI doomers” say that AIs are likely to kill us all. For example, a recent petition demanded a six-month moratorium on certain kinds of AI research. Many luminaries also declared:
Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.
Lead AI-doomer rationalist Eliezer Yudkowsky even calls for a complete and indefinite global “shut down” of AI research, because “the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die.” Mainstream media is now full of supportive articles quoting such doomers far more often than they do their critics.
AI doomers often suggest that their fears arise from special technical calculations. But in fact, their main argument rests on the mere logical possibility of a huge sudden AI breakthrough, combined with a suddenly murderous AI inclination.
However, we have no concrete reason to expect that. Humans have been improving automation for centuries, and software for 75 years. And as innovation is mostly made of many small gains, rates of overall economic and tech growth have remained relatively steady and predictable. For example, we predicted when computers would beat humans at chess decades in advance, and current AI abilities are not very far from what we should have expected given long-run trends. Furthermore, not only are AIs still at least decades from being able to replace humans on most of today’s job tasks, AIs are much further from the far greater abilities that would be required for one of them to kill everyone, by overwhelming all of humanity plus all other AIs active at the time.
In addition, AIs are now quite far from being inclined to kill us, even if they could do so. Most AIs are just tools that do particular tasks when so instructed. Some are more general agents, for whom it makes more sense to talk about desires. But such AI agents are typically monitored and tested frequently and in great detail to check for satisfactory behaviour. So it would be a quite sudden and radical change for AIs to, in effect, try to kill all humans.
* * *
Source:
https://quillette.com/2023/08/06/ais-will-be-our-mind-children/