No easy solutions

Might the risks of catastrophic AI malfunction be averted by putting more trust in the operation of the free market, by having more faith in God / karma / Gaia, by planning for a backup human colony on Mars, or by enhancing human brains as fast as AI itself advances?

These ideas have gained occasional support in the public discussion on AI. However, as I’ll now review, they are all fraught with dangers.

No guarantees from the free market

Consider the following argument:

  • Imagine two manufacturers of robots. The robots from one manufacturer occasionally harm humans. The robots from the other manufacturer always act beneficially.
  • Purchasers will clearly prefer the second kind of robot.
  • The company that makes the robots that always act beneficially will do well, economically, whereas the other one will go out of business. Among robot manufacturers, the only companies that will survive are the ones creating beneficial robots.
  • Accordingly, we don’t need to worry about robots (or other forms of AI) causing catastrophic problems for humanity. The free market will ensure that companies that might have gone on to create such robots will cease their operations before such an outcome occurs.

This argument may have some superficial attraction, but it is, alas, full of holes:

  1. A robot with a long history of treating humans with benevolence could, when faced with a new circumstance, flip into an error mode that causes it to treat humans abysmally instead.
  2. A robot that is occasionally unreliable may sell at a significantly lower price than one with more comprehensive safety engineering; so it may retain some market share, despite the quality issues.
  3. Companies that create products that harm humans often do well in the economy. Their products are sometimes bought because of their roles in spying on people, deceiving people, manipulating people, incentivising people to gamble irresponsibly (Las Vegas style and/or Wall Street style), or even – as part of military solutions – detaining people or killing them.
  4. Some products that satisfy all the people directly involved in the economic transaction – the vendor, the purchaser, and the users of the product – nevertheless have terrible “negative externalities” that damage the wider environment or society.

These observations are still compatible with the free market having an important role to play in accelerating the development and adoption of truly beneficial AI solutions. However, to obtain these benefits, the operation of the free market must be constrained and steered by the Singularity Principles.

No guarantees from cosmic destiny

Next, consider an argument that is rarely made explicitly, but which seems to lie at the back of some writers’ minds. The argument is that humanity’s experience with AI and robots is just one more step in an ongoing sequence of events, in which, each time, humanity has survived and (on the whole) become stronger.

Is there an explanation for this sequence of survival and progress? The argument suggests that an explanation might involve forces outside humanity. Examples of these forces could include the following:

  • A divine being, akin to those discussed in traditional religions
  • A cosmic cycle of ebb and flow, cause and effect, loosely similar to the Hindu notion of karma
  • More recent concepts such as Gaia, in which the earth’s biosphere is viewed as inherently self-sustaining
  • The notion that the universe we observe is a simulated creation of a being outside it, as in the idea of the Simulation Hypothesis
  • The concept that humanity is one link in a secure chain of cosmic evolution, described by “the law of accelerating returns” as propounded by futurist Ray Kurzweil.

The problem, in each case, is not just that it is debatable whether such a force exists in any meaningful way. The more serious problem is that the observed history of humanity contains many catastrophes: civilisations ending, large-scale genocide, ecosystems being ruined, and so on.

A person of faith might respond: In each case so far, the catastrophe has been local. Large proportions of humans may have died, but enough humans survived to continue the species.

However, there are two deep flaws with this response:

  • As technology becomes more powerful, it increases the chances that a catastrophe would result in human extinction, globally rather than just locally
  • Even if a catastrophe leaves a portion of humanity surviving, the large number of deaths involved should be a matter of deep concern, and we should take every measure to prevent such an outcome from occurring.

In contrast with any such attitude of faith in cosmic powers, the Singularity Principles embody the active transhumanist conviction that the future of humanity can be strongly influenced by human thoughts and human actions. This conviction is summarised as follows:

  • Radical opportunity: The near future can be much better than the present situation. The human condition can be radically improved, compared to what we’ve inherited from evolution and history.
  • Existential danger: The near future can be much worse than the present situation. Misuse of powerful technology can have catastrophic consequences.
  • Human agency: The difference between these two radical future options depends critically on human agency: wise human thinking and concerted human action.
  • No easy options: If humanity gives too little attention to these radical future options, on account of distraction, incomprehension, or intimidation, there’s a high likelihood of a radically bad outcome.

Planet B?

Consider the idea of humanity establishing a backup colony on another planet, such as Mars. The hope is that, if something goes wrong on Earth, the community on Mars would avoid destruction and live on, safe and sound.

It’s true that some kinds of planetary disaster, such as runaway climate change, would impact only the original planet. However, other types of global catastrophe are likely to cast their malign influence all the way from Earth to Mars. For example, a superhuman AI that decides that humanity is a blight on the cosmos will likely be able to track down and neutralise any humans that are hiding on a different planet.

In any case, this whole approach seems to make its peace far too easily with the awful possibility that all human life on Earth is destroyed. That’s a possibility we should work a lot harder to avoid, rather than simply planning an escape to Mars.

Therefore, whilst there are good arguments for humans to explore other planets and create settlements there, providing a secure safeguard against existential threats isn’t one of them.

Humans merging with AI?

Finally, consider the idea that, if humans merge with AI, humans could remain in control of AIs, even as these AIs rapidly become more powerful. With such a merger in place, human intelligence would automatically be magnified as AI improves in capability. Therefore, we humans wouldn’t need to worry about being left behind.

There are two big problems with this idea. First, so long as human intelligence is rooted in something like the biology of the brain, the mechanisms for any such merger may only allow relatively modest increases in human intelligence. To suggest some numbers: if silicon-based AIs were to become one thousand times smarter over a period of time, humans whose brains are linked to these AIs might experience only a tenfold increase in intelligence. Our biological brains would be bottlenecks that constrain the speed of progress in this hybrid case. Compared to pure AIs, the human-AI hybrid would, after all, be left behind in this intelligence race. So much for staying in control!
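
To make that illustrative comparison explicit – and to be clear, the growth factors here are purely hypothetical figures carried over from the example above, not measurements – the widening gap can be written as a simple ratio:

$$\frac{\text{capability gain of pure AI}}{\text{capability gain of human-AI hybrid}} = \frac{1000\times}{10\times} = 100\times$$

In other words, under these assumed numbers, the hybrid would end up roughly one hundred times less capable than the pure AI, even though its own capability had grown in absolute terms.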

An even bigger problem with this idea is the realisation that a human with superhuman intelligence is likely to be at least as dangerous as an AI with superhuman intelligence. The magnification of intelligence will allow that superhuman human to do all kinds of things with great vigour – settling grudges, acting out fantasies, demanding attention, pursuing vanity projects, and so on. Just think of your least favourite politician, terrorist leader, crime lord, religious fanatic, media tycoon, or corporate robber baron. Imagine that person with much greater power, due to being much more intelligent. Such a person would be able to destroy the earth. Worse, they might want to do so.

Another way to state this point is that the mere inclusion of AI elements inside a person won’t magically ensure that these elements become benign, or that they are subject to the full control of the person’s best intentions. Consider as comparisons what happens when biological viruses enter a person’s body, or when a cancer grows there. In neither case does the element lose its ability to cause damage, just on account of being part of a person who has humanitarian instincts.

The conclusion of this line of discussion is that we need to do considerably more than enable greater intelligence. We also need to accelerate greater wisdom – so that any beings with superhuman intelligence will operate truly beneficently. And that will involve the systematic application of the Singularity Principles.

Approaching the Singularity

Since no easy answers are at hand, it’s time to search more vigorously for harder answers.

These answers will emerge from looking more closely at scenarios for what is likely to happen as AI becomes more powerful.

It’s time, therefore, to turn our attention to the concept of the Singularity. This will involve untangling a series of awkward confusions.

