A complication: the Singularity Shadow

Why don’t more people pay serious attention to the remarkable prospect of the emergence, possibly within just a few decades, of Artificial General Intelligence (AGI)?

Part of the reason is a set of confusions and wishful thinking which surrounds this subject.

These confusions and wishful thinking form a kind of shadow around the central concept of the Technological Singularity – a shadow which obstructs a clearer perception of the risks and opportunities that are actually the most significant.

This chapter examines that shadow – the Singularity Shadow. I describe that shadow as consisting of seven overlapping areas:

  1. Singularity timescale determinism
  2. Singularity outcome determinism
  3. Singularity hyping
  4. Singularity risk complacency
  5. Singularity term overloading
  6. Singularity anti-regulation fundamentalism
  7. Singularity preoccupation

To be clear, there is a dual problem with the Singularity Shadow:

  • People within the shadow – singularity over-enthusiasts – make pronouncements about the Singularity that are (as we will see) variously overly optimistic, overly precise, or overly vague.
  • People outside the shadow – singularity over-critics – notice these instances of unwarranted optimism, precision, or vagueness, and jump to the wrong conclusion that the entire field of discussion is infected with the same flaws.

The Singularity Shadow misleads many people who, you might think, should know better. That shadow of confusion helps to explain why various university professors of artificial intelligence, along with people with job titles such as “Head of AI” in large companies, often make statements about the emergence of AGI that are, frankly, full of errors or deeply misleading.

Singularity timescale determinism

Singularity timescale determinism occurs when people make confident predictions that AGI will arrive by a particular date. For example, you may have heard people talking about the date 2045, or 2043.

Now it is reasonable to give a date range with an associated probability estimate for whether AGI might have emerged by that time. There are grounds for saying that it’s around 50% likely that AGI will have arrived by 2050, or that it’s around 10% likely that AGI will have arrived by 2030.

But what it’s not reasonable to do is to insist on a high probability for a narrow date range. That’s not responsible foresight. It’s more akin to a religious faith, as when Bible scholars studied the pages of the Old and New Testaments to predict that some kind of rapture would take place in the year 1843. By the way, the failure of these predictions in 1843 and 1844 became known as “the Great Disappointment”.
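To make that contrast concrete, here is a minimal sketch in Python, using entirely made-up numbers: one forecast spreads its probability across a wide range of dates, while the other piles nearly all of its probability onto a single narrow window. The figures and function names below are illustrative assumptions, not anyone’s actual forecast.

  # Hypothetical cumulative forecasts: P(AGI has arrived by the given year).
  # All of these numbers are invented purely for illustration.
  spread_forecast = [(2030, 0.10), (2040, 0.30), (2050, 0.50), (2070, 0.75), (2100, 0.90)]
  narrow_forecast = [(2044, 0.02), (2045, 0.95), (2046, 0.97)]  # "AGI will arrive in 2045"

  def prob_agi_by(forecast, year):
      """Cumulative probability that AGI has arrived by `year`, according to the forecast."""
      probabilities = [p for y, p in forecast if y <= year]
      return max(probabilities) if probabilities else 0.0

  for year in (2030, 2045, 2050):
      print(year, prob_agi_by(spread_forecast, year), prob_agi_by(narrow_forecast, year))

The first forecast expresses genuine uncertainty about timescales; the second is the style of claim criticised here.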

One basis for picking a precise date for the emergence of AGI is an extrapolation of the progress in hardware performance known as Moore’s Law. The assumption is that a given amount of computer memory and computer processing power will be sufficient to duplicate what happens inside a human brain.
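As an illustration of how that kind of extrapolation yields a suspiciously precise date, here is a minimal sketch in Python. Every input is an assumption chosen for the example (the “brain-equivalent” compute figure, the baseline year and capacity, and the doubling period); none of them is an established fact.

  import math

  # Illustrative assumptions only: none of these figures is an established fact.
  BRAIN_EQUIVALENT_FLOPS = 1e18   # assumed compute needed to duplicate a human brain
  BASELINE_YEAR = 2025            # assumed starting point for the extrapolation
  BASELINE_FLOPS = 1e15           # assumed compute available to a large project at that point
  DOUBLING_PERIOD_YEARS = 2.0     # assumed Moore's-Law-style doubling period

  def projected_crossover_year():
      """Year at which the extrapolated hardware curve crosses the assumed brain-equivalent level."""
      doublings_needed = math.log2(BRAIN_EQUIVALENT_FLOPS / BASELINE_FLOPS)
      return BASELINE_YEAR + doublings_needed * DOUBLING_PERIOD_YEARS

  print(round(projected_crossover_year()))   # with these particular assumptions: 2045

Change any one of those assumed inputs by a modest amount, and the “precise” date shifts by years or even decades.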

The first problem with that projection is that it’s not clear how long Moore’s Law can continue to hold. The rate at which smaller transistor architectures are reached has recently slowed.

A bigger problem is that this projection ignores the importance of software. The performance and capability of an AI system depends crucially on its software as well as on its hardware. With the wrong software, no amount of additional hardware power can make up for inadequacies in an AI. With suitable software breakthroughs, an AI could reach AGI levels more quickly than would be predicted purely by looking at hardware trends.

In other words, AGI could arrive considerably later – or considerably earlier – than the dates such as 2045 which have received undue prominence.

Another problem with these high-probability predictions of narrow date ranges is that they ignore the risks of a societal reverse in the wake of problems caused by factors such as pathogens, wars, revolutions, economic collapses, and runaway climate change.

Yet another problem with these predictions is that they encourage a sense of fatalism amongst observers who should, instead, actively engage with projects to influence the conditions under which AGI arrives.

In summary, when people inside the Singularity Shadow display singularity timescale determinism, they damage the credibility of the whole concept of the singularity. In contrast, the Singularitarian Stance emphasises the uncertainties in the timescales for the advent of AGI.

Singularity outcome determinism

A second uncertainty which the Singularitarian Stance emphasises is the uncertainty about the consequences of the advent of Artificial General Intelligence.

That’s in contrast to the overconfidence displayed by some people in the Singularity Shadow – an overconfidence regarding these outcomes being bound to be overwhelmingly positive for humanity.

This deterministic outlook sometimes seems to be grounded in a relentlessly positive optimism which some people like to project. That optimism pays attention only to evidence of increasing human flourishing, and discounts evidence of increasing human alienation and divisiveness, and of greater risks accumulating as greater power comes into the hands of angry, hostile people.

Another basis for assuming a pre-determined outcome to the singularity is the idea that a supremely intelligent software system will automatically become supremely ethical, and will therefore act with great vigour to uphold human flourishing.

But as covered in the previous chapter, there are several reasons to question that assumption. For example, we should beware the actions that might be taken by an AI that is well on the way to becoming a superintelligence, but which still contains various design shortcomings. That is, we should beware the actions of what is called an “immature superintelligence”.

Moreover, it’s by no means clear that a single universal set of “super ethics” exists, or that an AGI would agree with us humans about what the right set of ethics is.

The flip side of unwarranted optimism about the singularity is unwarranted pessimism. Both these varieties of singularity outcome determinism constrict our thinking. Both are not only unhelpful but positively dangerous.

Instead, the Singularitarian Stance highlights the ways in which deliberate, thoughtful, coordinated actions by humans can steer the outcome of the Singularity. We can be cautiously optimistic, but only if our optimism is grounded in an ongoing realistic assessment, careful anticipation of possible scenarios, vigilant proactive monitoring for unexpected developments, and agile engagement when needed.

Singularity hyping

A third uncertainty to highlight is the uncertainty regarding the AI methods that are most likely to result in Artificial General Intelligence.

That’s in contrast to another overconfidence displayed by some people in the Singularity Shadow – overconfidence regarding how AGI can actually be built.

This is the Singularity Shadow characteristic of singularity hyping. What gets hyped are particular solutions, methods, people, or companies.

Some of this hyping is financially motivated. People wish to increase sales for particular products, or to generate publicity for particular ideas or particular people, or to encourage investment in particular companies.

Some of this hyping arises from people’s personal insecurity. They seek some kind of recognition or acclaim. They jump on a bandwagon of support for some new technique, new company, or perceived new wunderkind. They derive some personal satisfaction from being a superfan. Perhaps they are also desperately trying to convince themselves that, in a world where many things seem bleak, things will turn out well in the end.

However, hyping can cause a great deal of damage. People may be persuaded to squander large sums of money in support of products or companies that can deliver no lasting value – money that would have been better invested in more deserving solutions.

Worse, solution hyping damages the credibility of the idea of the Singularity. When critics notice that Singularity over-enthusiasts are making unfounded claims, with insufficient evidence, it may increase their scepticism of the entire AGI field.

Instead, the Singularitarian Stance urges a full respect for the best methods of science and rationality, keeping an open mind, and communicating with integrity. That’s the way to increase the chance of reaching a positive singularity. But if hype predominates, it is no wonder that observers will turn away from any prolonged analysis of the Singularity.

Singularity risk complacency

Some people within the Singularity Shadow exhibit the characteristic of singularity risk complacency. That’s a willingness to overlook the risks of possible existential disaster from the Singularity, out of a fervent hope that the Singularity will arrive as quickly as possible – early enough to deliver, for example, cures for age-related diseases that would otherwise cause specific individuals to die.

These individuals know well that there are risks involved with the approach to AGI. But they prefer to avoid talking about these risks, or to offer glib reassurances about them, in order to quickly move the conversation back to positive outcomes.

These individuals don’t want to slow down any approach to the Singularity. They want humanity to reach the Singularity as quickly as possible. They see it as a matter of life and death.

These individuals want to minimise any discussions of what they call “doomsday scenarios” for the advent of AGI. If they cannot silence these discussions altogether, they want to make these scenarios appear ridiculous, so no-one will take them seriously.

What these individuals are worried about is political interference in projects to research, develop, and deploy improved AI. That interference is likely to introduce regulations and monitoring, complicating AI research, and, they fear, slowing it down.

It’s true that politicians frequently make a mess of areas where they try to introduce regulations. However, that’s not a good reason for trying to shut down arguments about potential adverse consequences from AI with ever greater capabilities. Instead, it’s a reason to improve the calibre of the public discussion about the Singularity. It’s a reason to promote the full set of ideas within the Singularity Principles.

Downplaying the risks associated with the Singularity is like the proverbial ostrich burying its head in the sand. It’s a thoroughly bad idea. What’s needed is as many eyes as possible looking into options for steering the Singularity. Even better is if the people looking are already knowledgeable about key issues that have arisen in earlier analysis. Accordingly, the Singularity Principles urge that an honest conversation takes place – not a conversation that attempts to hide or distort important information.

Singularity term overloading

Another reason why the conversation about the Singularity has become distorted is that writers have used the term “Singularity” for several conflicting ideas. This area of the Singularity Shadow can be called singularity term overloading.

The core meaning of the word “singularity”, as it is used in mathematics and physics, is a point at which past trends break down: if those trends were to continue, the functions describing them would reach infinite values.
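A standard textbook illustration of this (not specific to AI) is hyperbolic growth, which diverges at a finite time:

  \[
  x(t) \;=\; \frac{C}{t_s - t}, \qquad x(t) \to \infty \quad \text{as} \quad t \to t_s^{-}
  \]

Here C > 0 is a constant and t_s is the singular time: the formula that described the trend up to that point stops giving finite values at t = t_s, which is the sense in which past trends “break down”.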

Associated with this is a breakdown of our ability to foresee what might happen next. That connection was made by Vernor Vinge in his 1986 Prometheus Award-winning novel Marooned in Realtime, which was the first to seriously explore the idea of the “Technological Singularity”. Superintelligence – an entity with significantly greater intelligence than us – is likely to be focussed on different sorts of challenges and questions than the ones that motivate and animate us, which is one reason our ability to foresee its actions breaks down.

However, alongside these two associated concepts, of the emergence of superintelligence and the profound unpredictability of what happens next, various other meanings have been attached to the term “singularity”. These other meanings detract from the core concept. As covered in the chapter “What is the Singularity”, these additional meanings include:

  • Exponential acceleration
  • Humans transcending biology
  • AIs becoming sentient (or conscious)

These concepts are all interesting in their own right. However, it’s unhelpful to use the same name for all of them.

To avoid confusion, it’s better to stick with the core concept of the emergence of entities with much greater intelligence, and therefore much greater power, than present-day humans. That’s what deserves our priority attention.

Singularity anti-regulation fundamentalism

An important part of keeping an open mind about steering the development of advanced AI is keeping an open mind about the usage of legal regulations to constrain and incentivise that development.

That’s in contrast to a sixth aspect of the Singularity Shadow: singularity anti-regulation fundamentalism.

This fundamentalism insists that governments be prevented from passing regulations which might slow down progress toward the kinds of advanced AI that could lead to the singularity.

In this view, government regulation of fast-changing technology always lags far behind the actual needs of the people creating and using that technology. Regulations might be well-intentioned, but, the claim goes, they are almost always counterproductive.

However, that’s not what history shows. Legal regulations across many industries have played important roles in improving human wellbeing. For example, legal regulations boosted the safety of motor vehicles, cutting down the rate of deaths from accidents, and banned leaded petrol, which had horrific side-effects on people’s health.

Legal regulations played an important role in steering the chemical industry away from the use of CFC chemicals in refrigeration and aerosol cans, thereby preventing the growth of the hole in the ozone layer and the associated increase in skin cancer.

Another example is the regulation requiring the pasteurisation of milk, cutting back on incidences of childhood diseases that had often proved fatal. Consider also how regulation prevents advertising from making inflated claims that would mislead consumers, and how another set of regulations, this time in the banking industry, helps to protect citizens from losing their savings due to reckless banking practices.

Of course, in each case, counter arguments were made at the time these regulations were introduced, and many counter arguments continue to be made to this day. Getting regulations right is a hard task, full of complexities.

But here’s what no-one would say. Regulating the nuclear power industry is hard. So let’s cancel all attempts to regulate that industry. Instead, each provider of nuclear power will from now on be free to make their own decisions about cutting corners, commencing power generation before safety checks have been completed, employing personnel without adequate safety training, and disposing of nuclear waste in whatever way they happen to choose.

No, we should expect to work as hard on the appropriate regulations as we do on the technology itself. With the right frameworks in place, we can expect “many hands to make light work”. But without these frameworks, worse and worse mishaps are likely to arise as a result of AI systems that malfunction or are badly configured. And if no such frameworks exist, we should not be surprised if politicians attempt to close down the entire AGI project.

Singularity preoccupation

The seventh and final aspect of the Singularity Shadow can be called singularity preoccupation. That’s the view that shorter-term issues deserve little attention, since they pale in significance compared to the magnitude of changes arising from the technological singularity.

The view can be illustrated by a couple of vivid images that spread around social media in 2020. The first of these images, by professional cartoonist Graeme MacKay of the Hamilton Spectator, portrays an increasing cascade of threats to the wellbeing of so-called normal life. One wave is labelled “COVID-19” and is shown along with advice that seems intended to reassure people, “Be sure to wash your hands and all will be well”. Behind that first wave is a larger one labelled “Recession”, referring to economic trauma. And behind that is an even larger wave labelled “Climate Change”. This image gives a fair representation of what may be considered mainstream concern about the challenges faced by human civilization in the 2020s. The implication is that action more radical than mere hand-washing will be required.

The second image was created by German foresight blogger Alexander Kruel under his Twitter handle XiXiDu. It aims to provide additional perspective, beyond the concerns of mainstream culture. Two even larger waves bear down upon humanity. Dwarfing the threats from the first three waves is a wave representing bioterrorism. But dwarfing even that is a wave representing artificial intelligence.

This second image is thought-provoking. But it’s important to resist one dangerous conclusion that is sometimes drawn from this line of thinking.

That dangerous conclusion is that any effort spent on addressing potential existential threats such as runaway climate change or the spread of WMDs – weapons of mass destruction – is a distraction from effort that should be spent instead on managing the advent of superintelligence. In that analysis, superintelligence will produce comprehensive solutions to any threats such as runaway climate change or the spread of WMDs.

There are four reasons that conclusion is dangerous.

First, these existential threats could cause the collapse of society before we reach the point of creating AGI. Each of these threats poses serious risks in its own right. It is in combination that the greater challenges arise.

Second, even if these existential threats don’t actually destroy human civilization ahead of the emergence of AGI, they could cause such a set of political struggles that the social and academic environment in which AGI is developed would be damaged and increasingly dysfunctional. That raises the chance that, when AGI emerges, it will be flawed, leading in turn to the collapse of human civilization.

Third, even if these existential threats don’t destroy human civilization, either directly (case one above) or indirectly (case two above), they could still kill or maim many millions or even billions of humans.

Accordingly, human society needs to take actions to address two different timescales. In the short term, action is needed to lessen the chance of any of these existential threats detonating (literally or metaphorically). In the slightly longer term, action is also needed regarding particular risks and opportunities associated with the emergence of AGI.

If that sounds like double the amount of work, that’s not exactly true. The same set of methods, namely the Singularity Principles, addresses both these sets of issues in parallel.

Fourth, any apparent disregard by Singularity enthusiasts for such humanitarian setbacks en route to the creation of AGI will, with good reason, provoke revulsion among many observers against the community of people who take the idea of the Singularity seriously.

In other words, as with all the other characteristic elements of the Singularity Shadow, the associated attitude will drive many people away from giving the Singularity the attention it requires.

Looking forward

The various components of the Singularity Shadow can, and should, all be dispelled. This can happen as understanding spreads about the content of the Singularity Principles.

The result will be that the core nature of the Singularity will be seen much more clearly:

  • It will become easier for everyone to perceive the risks and opportunities surrounding the emergence of Artificial Superintelligence that are actually the most important.
  • It will also become easier for everyone to appreciate the actions that need to be taken to raise the probability of beneficial AGI as opposed to destructive AGI.

Nevertheless, various unhelpful human psychological tendencies still need to be countered – tendencies that lead people, despite all the evidence, to seek to deny the importance of the concept of the Singularity. That’s the subject of the next chapter.


Note: The following video from the Vital Syllabus contains a visual illustration of the Singularity Shadow. (Some aspects of the description of this shadow have evolved since the video was originally recorded.)
