No easy solutions
Might the risks of catastrophic AI malfunction be averted by putting more trust in the operation of the free market, by having more faith in God / karma / Gaia, by planning for a backup human colony on Mars, or by enhancing human brains as fast as AI itself advances?
These ideas have gained occasional support in the public discussion on AI. However, as I’ll now review, they are all fraught with dangers.
No guarantees from the free market
Consider the following argument:
- Imagine two manufacturers of robots. The robots from one manufacturer occasionally harm humans. The robots from the other manufacturer always act beneficially.
- Purchasers will clearly prefer the second kind of robot.
- The company that makes the robots that always act beneficially will do well, economically, whereas the other one will go out of business. Among robot manufacturers, the only companies that will survive are the ones creating beneficial robots.
- Accordingly, we don’t need to worry about robots (or other forms of AI) causing catastrophic problems for humanity. The free market will ensure that companies that might have created such robots cease their operations before any catastrophe occurs.
This argument may have some superficial attraction, but it is, alas, full of holes:
- A robot with a long history of treating humans with benevolence could flip into an error mode in a new circumstance, causing it to treat humans abysmally instead.
- A robot that is occasionally unreliable may sell at a significantly lower price than one with more comprehensive safety engineering, and so may retain some market share despite its quality issues.
- Companies that create products that harm humans often do well in the economy. Their products are sometimes bought because of their roles in spying on people, deceiving people, manipulating people, incentivising people to gamble irresponsibly (Las Vegas style and/or Wall Street style), or even – as part of military systems – detaining people or killing them.
- Some products that satisfy all the people directly involved in the economic transaction – the vendor, the purchaser, and the users of the product – nevertheless have terrible “negative externalities” that damage the wider environment or society.
These observations are still compatible with the free market having an important role to play in accelerating the development and adoption of truly beneficial AI solutions. However, to obtain these benefits, the operation of the free market must be constrained and steered by the Singularity Principles.
No guarantees from cosmic destiny
Next, consider an argument that is rarely made explicitly, but which seems to lie at the back of some writers’ minds. The argument is that humanity’s experience with AI and robots is just one more step in an ongoing sequence of events, in which, each time, humanity has survived and (on the whole) become stronger.
Is there an explanation for this sequence of survival and progress? The argument suggests that an explanation might involve forces outside humanity. Examples of these forces could include the following:
- A divine being, akin to those discussed in traditional religions
- A cosmic cycle of ebb and flow, cause and effect, loosely similar to the Hindu notion of karma
- More recent concepts such as Gaia, which regards the earth’s biosphere as inherently self-sustaining
- The notion that the universe we observe is a simulated creation of a being outside it, as in the idea of the Simulation Hypothesis
- The concept that humanity is one link in a secure chain of cosmic evolution, described by “the law of accelerating returns” as propounded by futurist Ray Kurzweil
The problem, in each case, is not just that it is debatable whether such a force exists in any meaningful way. The more serious problem is that the observed history of humanity contains many catastrophes: civilisations ending, large-scale genocide, ecosystems being ruined, and so on.
A person of faith might respond: In each case so far, the catastrophe has been local. A large proportion of humans may have died, but enough survived to continue the species.
However, there are two deep flaws with this response:
- As technology becomes more powerful, it increases the chances that a catastrophe would result in human extinction, globally rather than just locally.
- Even if a catastrophe leaves a portion of humanity surviving, the large number of deaths involved should cause us deep concern, and we should take every measure to prevent it from occurring.
In contrast with any such attitude of faith in cosmic powers, the Singularity Principles embody the active transhumanist conviction that the future of humanity can be strongly influenced by human thoughts and human actions. This conviction is summarised as follows:
- Radical opportunity: The near future can be much better than the present situation. The human condition can be radically improved, compared to what we’ve inherited from evolution and history.
- Existential danger: The near future can be much worse than the present situation. Misuse of powerful technology can have catastrophic consequences.
- Human agency: The difference between these two radical future options depends critically on human agency: wise human thinking and concerted human action.
- No easy options: If humanity gives too little attention to these radical future options, on account of distraction, incomprehension, or intimidation, there’s a high likelihood of a radically bad outcome.
No guarantees from a Mars backup
Consider the idea of humanity establishing a backup colony on another planet, such as Mars. Then if something goes wrong on Earth, the community on Mars would avoid destruction. It would live on, safe and sound.
It’s true that some kinds of planetary disaster, such as runaway climate change, would impact only the original planet. However, other types of global catastrophe are likely to cast their malign influence all the way from Earth to Mars. For example, a superhuman AI that decides that humanity is a blight on the cosmos will likely be able to track down and neutralise any humans that are hiding on a different planet.
In any case, this whole approach seems to make its peace far too easily with the awful possibility that all human life on Earth is destroyed. That’s a possibility we should work a lot harder to avoid, rather than escaping to Mars.
Therefore, whilst there are good arguments for humans to explore other planets and create settlements there, providing a secure refuge against existential threats isn’t one of them.
Humans merging with AI?
Finally, consider the idea that, if humans merge with AI, humans could remain in control of AIs, even as these AIs rapidly become more powerful. With such a merger in place, human intelligence would automatically be magnified as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind.
There are two big problems with this idea. First, so long as human intelligence is rooted in something like the biology of the brain, the mechanisms for any such merger may only allow relatively modest increases in human intelligence. To suggest some numbers: if silicon-based AIs were to become one thousand times smarter over a period of time, humans whose brains are linked to these AIs might experience only a tenfold increase in intelligence. Our biological brains would be bottlenecks that constrain the speed of progress in this hybrid case. Compared to pure AIs, the human-AI hybrid would, after all, be left behind in this intelligence race. So much for staying in control!
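The bottleneck arithmetic above can be made explicit. The sketch below uses the chapter's hypothetical numbers (a 1000-fold gain for pure AI versus a 10-fold gain for brain-linked humans); it is a minimal illustration of the widening relative gap, not a model of intelligence:

```python
def relative_gap(ai_gain: float, hybrid_gain: float) -> float:
    """Factor by which pure AI pulls ahead of the human-AI hybrid,
    given each side's multiplicative gain over the same period."""
    return ai_gain / hybrid_gain

# Hypothetical numbers from the text: AIs become 1000x smarter,
# brain-bottlenecked hybrids only 10x smarter.
gap = relative_gap(1000, 10)
print(gap)  # -> 100.0: the hybrid falls 100-fold further behind
```

However the gains are scaled, so long as the biological brain caps the hybrid's multiplier well below the pure AI's, the ratio grows rather than shrinks, which is the point of the paragraph above.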
An even bigger problem with this idea is the realisation that a human with superhuman intelligence is likely to be at least as dangerous as an AI with superhuman intelligence. The magnification of intelligence will allow that superhuman human to do all kinds of things with great vigour – settling grudges, acting out fantasies, demanding attention, pursuing vanity projects, and so on. Just think of your least favourite politician, terrorist leader, crime lord, religious fanatic, media tycoon, or corporate robber baron. Imagine that person with much greater power, due to being much more intelligent. Such a person would be able to destroy the earth. Worse, they might want to do so.
Another way to state this point is that, just because AI elements are included inside a person, that won’t magically ensure that these elements become benign, or are subject to the full control of the person’s best intentions. Consider as comparisons what happens when biological viruses enter a person’s body, or when a cancer grows there. In neither case does the element lose its ability to cause damage, just on account of being part of a person who has humanitarian instincts.
The conclusion of this line of discussion is that we need to do considerably more than enable greater intelligence. We also need to accelerate the development of greater wisdom – so that any beings with superhuman intelligence will operate truly beneficently. And that will involve the systematic application of the Singularity Principles.
Approaching the Singularity
Since no easy answers are at hand, it’s time to search more vigorously for harder answers.
These answers will emerge from looking more closely at scenarios for what is likely to happen as AI becomes more powerful.
It’s time, therefore, to turn our attention to the concept of the Singularity. This will involve untangling a series of awkward confusions.