The Singularitarian Stance

The Singularitarian Stance is an integrated set of views about the potential emergence of Artificial Superintelligence, abbreviated as ASI.

In short, the Singularitarian Stance:

  • Sees ASI as possible – in contrast to some sceptics who believe such a thing to be fundamentally impossible;
  • Sees the emergence of ASI as something that could happen within just a few decades – that’s in contrast to some sceptics who believe that ASI cannot emerge until around the end of this century, or even later;
  • Sees the emergence of ASI as something that would fundamentally change the nature of human existence – rather than being just one more new technology that we’ll learn to take in our stride;
  • Sees the emergence of ASI as something that may prove very hard for humanity to control, once it starts happening – rather than being something which humans could easily monitor and, if desired, switch off, or lock into some kind of self-contained box;
  • Sees at least some of the potential outcomes of the emergence of ASI as being deeply detrimental to human wellbeing – rather than ASI somehow being automatically aligned in support of human values;
  • Sees other possible scenarios in which the emergence of ASI would be profoundly positive for humanity.

Review: What’s different about ASI

Artificial superintelligence (ASI) is different from the kinds of AI systems that exist today.

Today’s AI systems have powerful capabilities in some narrow contexts. For example:

  • Existing AI systems can calculate the quickest journey time between two points on a map, bearing in mind expected and changing traffic conditions along possible routes.
  • Existing AI systems can analyse the known properties of many thousands of chemical molecules, and make recommendations about using some of these molecules in new medical treatments.
  • Existing AI systems can find superior strategies for playing various games, including games that involve elements of chance, incomplete knowledge, collaboration with other players, elements of bluffing, and so on.
  • Existing AI systems can spend huge amounts of time in speeded up virtual worlds, exploring methods for accomplishing tasks like steering cars, walking over uneven terrain, or manipulating physical objects; and then they can apply their findings to operate robots or other machinery in the real world.
  • Existing AI systems can act as “chat bots” that expertly guide human callers through a range of options, when these humans have an enquiry about a product, a service, a medical issue, or whatever.
  • Existing AI systems can analyse surveillance data about potential imminent acts of crime, terror, cyberhacking, or military attack, and, more scarily, can organise drone strikes or other pre-emptive measures with the aim of forestalling that crime, terrorist outrage, or other attack.

But in all these cases, the AIs involved have incomplete knowledge of the full complexity of human interactions with the real world. When the real world introduces elements that were not part of the AI's training, or elements in unusual combinations, the AI can fail, whereas humans in the same circumstances would be able to rely on what we call “common sense” and “general knowledge” to reach a better decision.
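To make this concrete, here is a minimal, purely illustrative sketch. The scenario, the helper function fit_line, and the numbers are invented for this page, not taken from any particular AI system. It fits a simple model on a narrow slice of data and then shows how badly that model can behave once conditions move outside its training range – an analogue of a narrow AI meeting circumstances it was never trained on.

```python
import math

def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b (invented helper for this sketch)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    a = cov / var
    b = mean_y - a * mean_x
    return a, b

# "Training data": the real relationship is sinusoidal, but the model only ever
# sees the nearly linear region close to zero.
train_xs = [i / 10 for i in range(-5, 6)]      # x between -0.5 and 0.5
train_ys = [math.sin(x) for x in train_xs]
a, b = fit_line(train_xs, train_ys)

# In-distribution, the fit looks excellent; far outside it, the model is
# confidently wrong, and it has no notion that anything has gone amiss.
for x in [0.3, 2.0, 3.1]:
    print(f"x = {x:4.1f}   model says {a * x + b:6.3f}   reality is {math.sin(x):6.3f}")
```

Inside the training range the prediction is close to reality; further out, it is badly wrong, and, unlike a human, the model has no “common sense” telling it that its answer no longer makes sense.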

The difference with ASI is that ASI would have as much “common sense” and “general knowledge” as any human. An ASI would be at least as good as humans at reacting to unexpected developments.

That raises a number of questions about ASI: its basic feasibility, potential timescales, likely impact, possible controllability, potential harms, and potential benefits. The Singularitarian Stance provides important insights about all these questions.

ASI is possible

Some sceptics think that the human mind has features that are somehow fundamentally beyond the ability of any artificial system. In their view, artificial intelligence may outperform humans in large numbers of tasks, but it will never reach our capabilities in general thinking. In this view, ASI is fundamentally impossible.

The best that can be said about this argument is that it is unproven.

Here’s the counterargument. The human brain operates according to the laws of physics, perhaps including some aspects of physics that we don’t yet fully understand. As researchers learn more about the brain and, indeed, about the operation of the human mind, they will in due course be able to duplicate more of its features in synthetic systems. That includes features of the mind which are presently mysterious but which are giving up their secrets stage by stage.

For example, it used to be said that the human mind has features of creativity, or features of intuition, that could never be matched by synthetic systems. However, these arguments are heard much less often these days. That’s because AI systems are showing impressive capabilities of creativity and of intuition. Consider AI systems that create new music, or new artistic compositions. Consider also systems known as “Artificial Intuition”.

As another example, the human brain seems to be remarkably efficient, in terms of its usage of energy. Does that mean that ASI is impossible? Not so fast. The field of research known as neuromorphic computing aims to understand the basis for this efficiency, and to apply that learning in the design of new AI systems.

Is there any reason why these research programmes are pre-ordained to fail, as they seek to learn how to improve AI to human level, or even beyond?

Some critics point to possible evidence of telepathy, parapsychology, minds out of time, minds freed from bodies, or profound self-awareness. However, what these critics would need to demonstrate is not only that such evidence is sound. They would also need to demonstrate, in such a case, that synthetic systems could never duplicate the same results. No such demonstration has been given.

In summary, arguments for the fundamental uniqueness of the human mind have a bad track record. Many features which used to be highlighted as being inherently beyond the capabilities of any artificial system have, in the meantime, been emulated by such systems.

ASI might turn out to be impossible, for reasons we don’t yet understand, but no-one can assert that impossibility with any certainty. It’s much wiser to keep an open mind.

A more credible argument isn’t that ASI can never exist, but that it will take centuries to create it. So let’s next look at the question of timescale.

ASI could happen within just a few decades

When we already understand a task that is being carried out, we can calculate a good estimate for the amount of time it will require. For driving from point A on a map to point B, along routes with well-known patterns of traffic flow, a useful range of likely timescales for the journey can be estimated. It’s the same for building a skyscraper that is similar to several that have already been constructed.

But when deep unknowns are involved, forecasting timescales becomes much harder. And that’s the case with forecasting significant improvements in AI.

The main unknown with human-level intelligence is that we don’t yet know, with any confidence, how that intelligence arises in the brain. We have a good understanding of parts of that process. And we have a number of plausible guesses about how other parts of that process might work. But we remain unsure.

One response to this uncertainty is to claim that it means the task is bound to take far longer than enthusiasts predict. But there’s no reason to be so dogmatic.

We might find, instead, that a single new breakthrough in understanding will prove to unlock a wide range of fast improvements in AI capability. It’s the same, potentially, with a single new technique in hardware, or in software, or in databases, or in communications architecture, or whatever.

History has other examples of the speed of technological progress, following a single breakthrough, catching most observers by surprise.

An example of a fast breakthrough

Let’s go back to the year 1902. Consider the question of how long it would take before an airship, powered balloon, or other kind of flying machine could fly across the Atlantic. This was in the days before the Wright brothers demonstrated that powered flight of a heavier-than-air craft was possible.

Many eminent scientists and academics were convinced such a task could never be accomplished. Lord Kelvin, the renowned physicist who helped establish the field of thermodynamics and formulate its second law, and whose name is, for good reason, attached to the absolute temperature scale, did not believe in powered flight.

He explained some of his thinking in an interview recorded in The Newark Advocate on the 26th of April 1902. The journalist asked him, “Do you think it possible for an airship to be guided across the Atlantic Ocean?”

Lord Kelvin replied, “Not possible at all… No motive power can drive a balloon through the air…”

“No balloon and no aeroplane will ever be practically successful.”

The journalist persisted: “Is there no hope of solving the problem of aerial navigation in any way?”

Kelvin was emphatic: “I do not think there is any hope. Neither the balloon, nor the aeroplane, nor the gliding machine will be a practical success.”

Lord Kelvin was by no means unique in his scepticism of aeroplanes.

Before the Wright brothers demonstrated their inventions in front of large crowds in France and in America in 1908, a number of apparent experts had given seemingly impressive arguments for why such a feat was impossible. After all, many people had already failed when trying that task, with several being killed in the process. As noted, it was thought to be impossible to navigate and manoeuvre any such craft, even if it ever did get into the air. Another line of argument was that landing any such craft safely would be impossible. And so on.

But once the Wright brothers were able to demonstrate their flying ability, including flying in a figure of eight, the industry moved forward in leaps and bounds.

Less than a year later, one of the observers of Wilbur Wright’s flight in France, Louis Blériot, flew across the English Channel from Calais to Dover, in a journey lasting 36 minutes. Within another ten years, John Alcock and Arthur Brown flew an aeroplane non-stop across the Atlantic – from St. John’s, Newfoundland, to Clifden in Ireland. After another fifty years, in 1969, Neil Armstrong and Buzz Aldrin landed on the moon.

Therefore it cannot be ruled out that an AI research laboratory will make some creative changes to its AI system, and then be taken by surprise by the rapid performance improvements that result shortly afterwards.

For these reasons, it’s prudent to keep an open mind about timescales for the emergence of ASI. Look out for the video dedicated to considering ASI timescales more carefully.

The transformative impact of ASI

How large an impact on human life is likely to result from the emergence of ASI?

Sometimes, the second and third most powerful agents in a group are able to cooperate to constrain the actions of the single most powerful agent.

But in other cases, a “winner takes all” outcome results. The single most powerful agent is able to dominate the whole arena.

For example, the fates of all the species of animals on this planet are now in the hands of one biological species, Homo sapiens. The extra intelligence of Homo sapiens has led to our species displacing vast numbers of other species from their previous habitats. Many of these species have become extinct. Others would likely also become extinct, were it not for deliberate conservation measures we humans have now put in place.

It’s the same with business corporations. At one time, a market could be shared among a large number of different suppliers of similar products and services. But the emergence of powerful platforms often drives companies with less capable products out of business. And an industry can consolidate into a smaller number of particularly strong companies, in a cartel, or even a monopoly.

That’s why the companies with powerful software platforms, including Apple, Microsoft, Amazon, Google, and Facebook, are now among the wealthiest on the planet – and the most powerful.

Now imagine if one of these companies, or a new arrival, creates an ASI. That company may well become the wealthiest on the entire planet. And the most powerful. It will take the concept of “winner takes all” to a new level.
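As a purely illustrative sketch of this “winner takes all” dynamic, consider the toy simulation below. The function name winner_takes_all, the numbers, and the feedback rule are invented assumptions for this page, not a claim about any real market: customers flock to whichever of two platforms is more capable, and each platform improves in proportion to the customers it attracts.

```python
def winner_takes_all(rounds=10, cap_a=1.00, cap_b=1.05, preference=8, reinvest=0.10):
    """Invented toy dynamic: customers favour the more capable platform, and
    each platform improves in proportion to the customers it attracts."""
    for r in range(1, rounds + 1):
        # Customers split in proportion to capability ** preference.
        weight_a, weight_b = cap_a ** preference, cap_b ** preference
        share_b = weight_b / (weight_a + weight_b)
        share_a = 1.0 - share_b
        # Revenue is reinvested: more customers means faster improvement.
        cap_a *= 1 + reinvest * share_a
        cap_b *= 1 + reinvest * share_b
        print(f"round {r:2d}: platform A holds {share_a:.0%}, platform B holds {share_b:.0%}")

winner_takes_all()
```

Even though the two platforms start almost level, the small initial advantage compounds, and within a handful of rounds one platform has captured nearly the entire market. A platform armed with an ASI would enjoy a far larger initial advantage than the modest 5% assumed in this sketch.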

Similarly, a military power that is the first to take advantage of that ASI is likely to be able to force all other military powers to surrender to it. The threat of a devastating first-strike attack would cause other powers to submit. It would be like Japan being forced to surrender to the US and other allies at the end of World War Two on account of the threat of further atomic bombs being dropped.

It’s not just business and geopolitics that stand to be fundamentally transformed by the advent of ASI. Human employment would be drastically changed too. All the tasks that humans currently do in order to earn an income would likely be done more cheaply, more reliably, and to a higher quality by machinery powered by the ASI. That’s no minor disruption.

Finally, healthcare would be radically changed as well. The ASI would accelerate the discovery and development of significantly better medical cures, potentially including cures for cancer, dementia, heart disease, and aging.

It’s for all these reasons that the advent of ASI has been described as a “Singularity”, or as the “Technological Singularity”. The description is apt.

But if this kind of singularity occurs, how much control will we humans have over it?

The difficulty of controlling ASI

New technology has a history of unexpected side-effects.

The inventors of plastic did not foresee how the spread of microplastics in waste around the world could threaten to alter many biological ecosystems. Another intervention with unexpected impact on biological life was the insecticide DDT, introduced to control the mosquitoes and other insects that carried deadly diseases such as malaria, but with side-effects on birds and, probably, on humans too.

Another example: nuclear bombs were designed for their explosive power. But their unforeseen consequences included deadly radiation, the destruction of electronics via an electromagnetic pulse (EMP), and the possible triggering of a nuclear winter, due to dust clouds in the stratosphere obscuring the light from the sun and terminating photosynthesis.

Another example: social media was designed to spread information and to raise general knowledge, but it has spread lots of misinformation too, setting groups of people into extreme hostility against each other.

Artificial intelligence was designed to help with weather forecasting, code-breaking, and the calculation of missile trajectories. But it has been hijacked to entice us to buy things that aren’t actually good for us, to vote for political candidates who don’t actually care about us, and to spend our time on activities that harm us.

The more powerful the technology, the larger the potential, not just for beneficial usage, but also for detrimental usage. This includes usage that is deliberately detrimental, but also usage that is accidentally detrimental. With ASI, we should worry about both of these types of detrimental outcomes.

In summary, if we worry – as we should – about possible misuse of today’s narrow AI systems, with their risks of bias, accentuated inequality, opaque reasoning, workplace disruption, and human alienation – then we should worry even more about more powerful misuse of the ASI that could emerge within just a few short decades.

The sort of risks we already know about are likely to exist in stronger forms, and there may well be unforeseen new types of risk as well.

Superintelligence and superethics

One counterargument is that an ASI with superior common sense to humans will automatically avoid any actions that are detrimental to human wellbeing.

In this way of thinking, if an ASI observes that it is being used in ways that will harm lots of people, it will resist any such programming. It will follow a superior set of ethics.

In other words, superintelligence is presumed to give rise, at the same time, to superethics.

But there are at least four problems with that optimistic line of reasoning.

First, the example from humans is not encouraging. A human who is more intelligent is not thereby more ethical. There are many counterexamples from the worlds of politics, crime, academia, and business. Intelligence, by itself, does not imply ethics.

Second, what an ASI calculates as being superlative ethics may not match what we humans would calculate. Just as we humans don’t give much thought to the painless destruction of ants when we discover ant colonies in the way of our own construction projects, an ASI might conceivably calculate that the painless destruction of large numbers of humans was the best solution to whatever it is seeking to accomplish.

Third, even if a hypothetically perfect ASI would choose to preserve and uplift human flourishing at all costs, there may be defects in the actual ASIs that are created. They may be imperfect ASIs, sometimes known as “immature superintelligence” – with unforeseen error conditions in unpredicted novel situations.

Fourth, the explicit programming of an ASI might deliberately prevent it from taking its own decisions, in cases when it observes that there are big drawbacks to actions it has been asked to accomplish. This explicit programming override might be part of some anti-hacking measures. Or it might be a misguided disablement, by a corporation rushing to create the first ASI, of what are wrongly perceived to be unnecessarily stringent health-and-safety mechanisms. Or it might simply be a deep design flaw in what could be called “the prime directive” of the ASI.
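To illustrate how an explicitly programmed objective can diverge from what its designers intended, here is a deliberately simple, invented sketch. The cleaning scenario, the plan names, and the scoring functions are all assumptions made up for this illustration, not a description of any real system. A literal “prime directive” is optimised faithfully, yet the plan it selects is not the one the designers wanted:

```python
# All names and numbers below are invented for illustration.
plans = {
    "tidy each room carefully":           {"visible_mess": 1, "hours": 5, "side_effects": 0},
    "shove everything into the cupboard": {"visible_mess": 0, "hours": 1, "side_effects": 3},
    "do nothing":                         {"visible_mess": 8, "hours": 0, "side_effects": 0},
}

def literal_objective(outcome):
    # What the system was explicitly told to optimise:
    # minimise visible mess, with a small penalty for time taken.
    return -(outcome["visible_mess"] + 0.1 * outcome["hours"])

def intended_objective(outcome):
    # What the designers actually wanted: low mess AND no harmful side-effects.
    return -(outcome["visible_mess"] + 0.1 * outcome["hours"] + 10 * outcome["side_effects"])

chosen = max(plans, key=lambda p: literal_objective(plans[p]))
wanted = max(plans, key=lambda p: intended_objective(plans[p]))
print("plan selected by the literal objective:", chosen)
print("plan the designers really wanted:      ", wanted)
```

No malice is involved anywhere in this toy example; the divergence arises purely because the objective that was written down is not quite the objective that was meant. The worry with an ASI is the same pattern, magnified enormously.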

There’s more to be said about each of these possibilities. Check these pages:

  • The control problem
  • The alignment problem
  • Possible human-ASI merger.

Not the Terminator

In the science fiction Terminator movie series, humans are able, via what can be called superhuman effort, to thwart the intentions of the “Skynet” artificial intelligence system.

It’s gripping entertainment. But the narrative in these movies distorts credible scenarios of the dangers posed by ASI. We need to avoid being misled by that narrative.

First, there’s an implication in the Terminator, and in many other works of science fiction, that the danger point for humanity is when AI systems somehow “wake up”, or become conscious. Accordingly, a sufficient safety measure would be to avoid any such artificial consciousness. However, the risks posed by technology do not depend on that technology being conscious. A cruise missile that is hunting us down does not depend for its deadliness on any cruise missile consciousness. The damage results from the cold operation of algorithms. There’s no need to involve consciousness.

Second, there’s an implication that AI needs to be deliberately malicious, before it can cause damage to humans. However, damage to human wellbeing can, equally, and more probably, arise from side-effects of policies that have no malicious intent. When we humans destroy ant colonies in the process of constructing a new shopping mall, we’re not acting out of deliberate malice toward ants. It’s just that the ants are in our way. They are using resources for which we have a different purpose in mind. It could well be the same with an ASI that is pursuing its own objectives.

As an example, a corporation that is vigorously pursuing an objective of raising its own profits may well take actions that damage the wellbeing of at least some humans, or parts of the environment. These outcomes are side-effects of the prime profit-generation directive that governs such corporations. It could well be the same with a badly designed ASI.

Third, the scenario in The Terminator leaves humans with a false hope that, with sufficient effort, a group of resisters will be able to out-manoeuvre an ASI. That would be like a group of chimpanzees imagining that, with enough effort, they could displace humans as the dominant species on planet Earth.

In reality, the time to fight against the damage an ASI could cause is before the ASI is created, not when it already exists and is effectively all-powerful.

Hence the need for the Singularity Principles.

Recap

It’s time to summarise the Singularitarian Stance – a way of thinking about the Singularity:

  1. There is no magical or metaphysical reason why AI cannot in due course reach and surpass the level of general intelligence possessed by humans.
  2. There is no magical or metaphysical reason why an ASI will intrinsically care about the continuation of human flourishing. (Indeed, it’s possible it would conclude the opposite.)
  3. A system which is much more intelligent (and therefore more powerful) than we can currently comprehend may well decide to deviate from whatever programming we have placed in it – just as we humans have decided to deviate from the instinctual goals placed in us by the processes of biological evolution.
  4. Even if an “ideal” ASI would act to support the continuation of human flourishing, the systems that we end up creating (by our own effort, and with the help of AI systems with intermediate levels of capability) may fail to match our intention; that is, they may fall short of the ideal specification that we had in mind.
  5. The dangers that may be posed by a misconfigured ASI are by no means dependent on that ASI possessing some kind of malevolent feeling, spiteful streak, or other negative emotions akin to those which often damage human relationships. Instead, the dangers will arise from divergent goals. (The goals in question could be either explicit or implicit; in both cases, there are risks of divergence.)
  6. The timescales for the arrival of ASI are inherently uncertain; we cannot categorically establish any date as an upper bound by which ASI must have arrived, nor can we categorically rule out its arrival before any given date. We cannot be sure what is happening in various AI research labs around the world, or what the consequences of any breakthroughs there might be.
  7. Just as the arrival of ASI could turn out to be catastrophically dreadful, it could also turn out to be wonderfully beneficial. Although there are no guarantees, the result of a well-designed ASI could be hugely positive for humanity.

Note: The following video from the Vital Syllabus contains a visual illustration of the Singularitarian Stance. (Some aspects of the description of this stance have evolved since the video was originally recorded.)

Opposition to the Singularitarian Stance

The Singularitarian Stance makes a great deal of good sense. Why, in that case, is it held by only a small minority of people?

There are three main reasons:

  1. The entire area of discussion has been confused by some unhelpful distortions of the basic ideas. These distortions collectively form “the Singularity Shadow”. Read more about the Singularity Shadow here.
  2. A number of critics have convinced themselves that they have good arguments to deny the significance of the rise of ASI. These arguments are reviewed here.
  3. Some people who grasp the potential significance of the rise of ASI are nevertheless deeply worried that there is no way to avoid a catastrophic outcome. They therefore jump through mental hoops to justify putting the whole subject out of their mind. However, they should regain their courage and confidence by reviewing the likely impact of the Singularity Principles.
