No easy solutions
Might the risks of catastrophic AI malfunction be averted by putting more trust in the operation of the free market, by having more faith in God / karma / Gaia, by planning for a backup human colony on Mars, or by enhancing human brains as fast as AI itself advances?
These ideas have gained occasional support in the public discussion on AI. However, all of them are fraught with dangers.
The unreliability of the free market
Imagine two manufacturers of robots. One produces robots that occasionally harm humans; the other's robots always act beneficially. The company that makes the robots that always act beneficially will do well, economically, whereas the other one will go out of business. Among robot manufacturers, the only companies that will survive are the ones creating beneficial robots. Accordingly, so the argument goes, we don't need to worry about robots (or other forms of AI) causing catastrophic problems for humanity.
This argument may have some superficial attraction, but it is, alas, full of holes:
- A robot that has a long history of treating humans well could, in a new circumstance, flip into an error mode that causes it to treat humans terribly instead.
- A robot that is occasionally unreliable may sell at a price significantly lower than one with more comprehensive safety engineering, so it will retain some market share.
- Companies that create products that harm humans often do well in the economy. Their products are sometimes bought because of their roles in manipulating people, spying on people, deceiving people, incentivising people to gamble irresponsibly (Las Vegas style and/or Wall Street style), or even – as part of military solutions – detaining people or killing them.
- Some products that satisfy all the people directly involved in the economic transaction – the vendor, the purchaser, and the users of the product – nevertheless have terrible “negative externalities” that damage the wider environment or society.
For some examples, see this page on the risks and benefits of fast-changing technology.
These observations are still compatible with the free market having an important role to play in accelerating the development and adoption of beneficial AI solutions. However, to obtain these benefits, the operation of the free market must be constrained and steered by the Singularity Principles.
The unreliability of faith in cosmic destiny
Next, consider an argument that is rarely made explicitly, but which seems to lie at the back of some writers' minds. The argument is that humanity's experience with AI and robots is just one more step in an ongoing sequence of events in which, each time, humanity has survived and (on the whole) become stronger.
Is there an explanation for this sequence of survival and progress? One idea is that the explanation lies in forces outside humanity:
- A divine being, akin to those discussed in traditional religions
- A cosmic cycle of ebb and flow, cause and effect, loosely similar to the Hindu notion of karma
- More modern concepts, such as Gaia, which regard the earth’s biosphere as inherently self-sustaining
- The notion that the universe we observe is a simulated creation of a being outside it, as in the idea of the Simulation Hypothesis
- The concept that humanity is one link in a secure chain of cosmic evolution, described by “the law of accelerating returns” as articulated by futurist Ray Kurzweil.
The problem, in each case, is not just that it is debatable whether such a force exists in any meaningful way. The more serious problem is that the observed history of humanity contains many catastrophes: civilisations ending, ecosystems being ruined, large-scale genocide being committed, and so on.
A person of faith might respond: In each case, the catastrophe has been local. Large proportions of humans may have died, but enough humans survived to continue the species.
However, there are two deep flaws with this response:
- As technology becomes more powerful, it increases the chances that a catastrophe would result in human extinction
- Even if a catastrophe leaves a portion of humans surviving, the large number of deaths involved should cause us deep concern, and we should take every measure to prevent it from occurring.
In contrast with any such attitude of faith in cosmic powers, the Singularity Principles embody the active transhumanist conviction that the future of humanity can be strongly influenced by human thoughts and human actions. This conviction is summarised as follows:
- Radical opportunity: The near future can be much better than the present situation. The human condition can be radically improved, compared to what we’ve inherited from evolution and history.
- Existential danger: The near future can be much worse than the present situation. Misuse of powerful technology can have catastrophic consequences.
- Human agency: The difference between these two radical future options depends critically on human agency: wise human thinking and concerted human action.
- No easy options: If humanity gives too little attention to these radical future options, on account of distraction, incomprehension, or intimidation, there’s a high likelihood of a radically bad outcome.
A backup colony on Mars?
Consider the idea of humanity establishing a backup colony on another planet, such as Mars. Then if something goes wrong on Earth, the community on Mars would avoid destruction. It would live on, safe and sound.
It’s true that some kinds of planetary disaster, such as runaway climate change, would impact only the original planet. However, other types of global catastrophe are likely to cast their malign influence all the way from Earth to Mars. For example, a superhuman AI that decides that humanity is a blight on the cosmos will likely be able to track down and neutralise any humans that are hiding on a different planet.
In any case, this whole approach seems to make its peace far too easily with the awful possibility that all human life on Earth is destroyed. That’s a possibility we should work a lot harder to avoid, rather than planning to escape to Mars.
Therefore, whilst there are good arguments for humans to explore other planets and create settlements there, providing a secure solution against existential threats isn’t one of them.
Humans merging with AI?
Consider the idea that, if humans merge with AI, humans could remain in control of AIs, even as these AIs rapidly become more powerful. With such a merger in place, human intelligence would automatically be magnified as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind.
One problem with this idea is that, so long as human intelligence is rooted in something like the biology of the brain, the mechanisms for any such merger may only allow relatively modest increases in human intelligence. If silicon-based AIs were to become one thousand times smarter over a period of time, humans whose brains are linked to these AIs might experience only a tenfold increase in intelligence. Our biological brains would constrain the speed of progress. The human-AI hybrid would, after all, be left behind in this intelligence race. So much for staying in control.
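The arithmetic behind this paragraph can be made explicit with a small sketch. The figures below are purely illustrative (the 1000× and 10× gains are the paragraph's hypothetical numbers, not measurements of anything real); the point is simply that if the AI's capability multiplies far faster than the brain-limited hybrid's, the hybrid's relative standing collapses rather than keeping pace:

```python
# Illustrative sketch only: hypothetical numbers from the text, not real data.
# Shows how a human-AI hybrid falls behind when its biologically constrained
# gain (10x) is much smaller than the AI's gain (1000x) over the same period.

def relative_standing(ai_gain: float, hybrid_gain: float,
                      start_hybrid: float = 1.0, start_ai: float = 1.0) -> float:
    """Ratio of hybrid intelligence to AI intelligence after the given gains."""
    return (start_hybrid * hybrid_gain) / (start_ai * ai_gain)

# Starting from parity, apply the paragraph's hypothetical figures:
before = relative_standing(ai_gain=1.0, hybrid_gain=1.0)       # parity: 1.0
after = relative_standing(ai_gain=1000.0, hybrid_gain=10.0)    # 10/1000 = 0.01

print(before, after)  # the hybrid ends up at 1% of the AI's level
```

However the exact numbers are chosen, any persistent gap between the two growth rates produces the same qualitative outcome: the hybrid's share of the total intelligence shrinks toward zero, which is the chapter's point about "staying in control".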
A considerably bigger problem with this idea is the realisation that a human with superhuman intelligence is likely to be just as dangerous as an AI with superhuman intelligence. The magnification of intelligence will allow that superhuman human to do all kinds of things with great vigour – settling grudges, acting out fantasies, demanding attention, pursuing vanity projects, and so on. Just think of your least favourite politician, terrorist leader, crime lord, religious fanatic, media tycoon, or corporate robber baron. Imagine that person with much greater power, due to being much more intelligent. Such a person would be able to destroy the earth. Worse, they might want to do so.
That’s an argument that we need to do considerably more than enable greater intelligence. We also need to accelerate greater wisdom – so that any beings with superhuman intelligence will operate truly beneficently. And that will involve the systematic application of the Singularity Principles.