The Singularity Shadow

A complication: the Singularity Shadow

Why don’t more people pay serious attention to the remarkable prospect of the emergence, possibly within just a few decades, of Artificial Superintelligence (ASI)?

Part of the reason is a set of confusions and wishful thinking which surrounds this subject.

These confusions and wishful thinking form a kind of shadow around the central concept of the Technological Singularity – a shadow which obstructs a clearer perception of the risks and opportunities that are actually the most significant.

This page examines that shadow – the Singularity Shadow.

It’s a shadow that misleads many people whom you might think should know better. That shadow of confusion helps to explain why various university professors of artificial intelligence, along with people with job titles such as “Head of AI” in large companies, often make statements about the emergence of ASI that are, frankly, full of errors or deeply misleading.

Two basic mistakes

Let’s start with a couple of straightforward mistakes in which people tend to jump to wrong conclusions.

First, people sometimes say that they cannot imagine any way in which a particular aspect of human general intelligence could be duplicated in an AI system. To these critics, aspects of human general intelligence appear fundamentally baffling.

That honest expression of bafflement is to be admired. What is not to be admired, however, is when someone says that, because they cannot imagine any solution, therefore no such solution can ever exist. They are constraining the future by their own limited imagination.

It’s similar to the reaction of an audience to seeing a magician perform a dramatic conjuring trick. In such cases, we sometimes cannot believe our eyes. We cannot imagine how the trick could be performed. But it would be very wrong for us to conclude that the magician really does possess magical abilities, such as making objects miraculously disappear from one location and reappear in another. In reality, once we have learned the secret of the trick, we may still admire the cunning of the magician and their skills in manipulating objects with their fingers, but the sense of profound wonder dissolves. It may well be the same with aspects of human general intelligence that presently remain somewhat mysterious. When we eventually understand how they work, we’ll lose our sense of bafflement.

Second, people sometimes say that, if the rate of progress continues as at present, it will take around one hundred years before a particular problem in implementing ASI can be solved. The mistaken conclusion is to say that, therefore, it will take one hundred years before ASI can exist. This ignores the possibility of a substantial speed-up in research. Speed-ups of that sort often happen when more resources are applied to a field, or when a different conceptual approach is found to be more productive.

What these two examples show is that we should beware of being too confident in making assertions about the timescales for the emergence of ASI. That brings us to the first aspect of the Singularity Shadow.

Singularity timescale determinism

Singularity timescale determinism occurs when people make confident predictions that ASI will arrive by a particular date. For example, you may have heard people talking about the date 2045, or 2043.

Now it is reasonable to give a date range with an associated probability estimate for whether ASI might have emerged by that time. There are grounds for saying that it’s around 50% likely that ASI will have arrived by 2050, or that it’s around 10% likely that ASI will have arrived by 2030.

But what it’s not reasonable to do is to insist on a high probability for a narrow date range. That’s not responsible foresight. It’s more akin to a religious faith, as when Bible scholars studied the pages of the Old and New Testaments to predict that some kind of rapture would take place in the year 1843. By the way, the failure of those predictions in 1843 and 1844 is sometimes called “the Great Disappointment”.

One basis for picking a precise date for the emergence of ASI is an extrapolation of the trend in hardware performance known as Moore’s Law. The assumption is that a given amount of computer memory and processing power will be sufficient to duplicate what happens inside a human brain.

The first problem with that projection is that it’s not clear how long Moore’s Law can continue to hold. There has recently been a slowdown in the rate at which smaller transistor geometries are being reached.

A bigger problem is that this projection ignores the importance of software. The performance and capability of an AI system depends crucially on its software as well as on its hardware. With the wrong software, no amount of additional hardware power can make up for inadequacies in an AI. With suitable software breakthroughs, an AI could reach ASI levels more quickly than would be predicted purely by looking at hardware trends.

In other words, ASI could arrive considerably later – or considerably earlier – than the dates such as 2045 which have received undue prominence.
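To make that sensitivity concrete, here is a minimal sketch, in Python, of the kind of hardware-only extrapolation described above. Every number in it (the starting year, the starting level of compute, the guesses for “brain-equivalent” compute, and the doubling times) is an illustrative assumption rather than a figure taken from this text.

```python
import math

def projected_crossover_year(base_year, base_flops, required_flops, doubling_time_years):
    """Year at which available compute would reach required_flops, assuming it
    keeps doubling every doubling_time_years. A pure extrapolation, not a prediction."""
    doublings_needed = math.log2(required_flops / base_flops)
    return base_year + doublings_needed * doubling_time_years

# All numbers below are illustrative placeholders, not figures from the text.
for required_flops in (1e16, 1e18, 1e20):   # spread of guesses for "brain-equivalent" compute
    for doubling_time in (1.5, 2.0, 3.0):   # faster or slower than the classic Moore's Law pace
        year = projected_crossover_year(2025, 1e15, required_flops, doubling_time)
        print(f"required {required_flops:.0e} FLOPS, doubling every {doubling_time} years "
              f"-> crossover around {year:.0f}")
```

Under these assumptions alone, the projected crossover dates range from roughly 2030 to 2075. That spread, before software is even considered, is one more reason to distrust any single confidently asserted year.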

A final problem with these high-probability predictions of narrow date ranges is that they ignore the risks of a societal reverse in the wake of problems caused by factors such as pathogens, wars, revolutions, economic collapses, and runaway climate change.

In summary, when people inside the Singularity Shadow display singularity timescale determinism, they damage the credibility of the whole concept of the singularity. In contrast, the Singularity Principles emphasise the uncertainties in the timescales for the advent of artificial superintelligence.

Singularity outcome determinism

A second uncertainty which the Singularity Principles emphasise is the uncertainty about the consequences of the advent of artificial superintelligence.

That’s in contrast to the overconfidence displayed by some people in the Singularity Shadow – an overconfidence regarding these outcomes being bound to be overwhelmingly positive for humanity.

This deterministic outlook sometimes seems to be grounded in a relentlessly positive optimism which some people like to project. That optimism pays attention only to evidence of increasing human flourishing, and it discounts evidence of increasing human alienation and divisiveness, and evidence of greater risks being accumulated as greater power comes into the hands of angry, hostile people.

Another basis for assuming a pre-determined outcome to the singularity is the idea that a supremely intelligent software system will automatically become supremely ethical, and will therefore act with great vigour to uphold human flourishing.

But as covered in the page on the Singularitarian Stance, there are several reasons to question that assumption. For example, we should beware the actions that might be taken by an AI that is well on the way to becoming a superintelligence, but which still contains various design shortcomings. That is, we should beware the actions of what is called an “immature superintelligence”.

Moreover, it’s by no means clear that a single universal set of “super ethics” exists, or that an ASI would agree with us humans about what the right set of ethics is.

The flip side of unwarranted optimism about the singularity is unwarranted pessimism. Both these varieties of singularity outcome determinism constrict our thinking. Both are not only unhelpful but positively dangerous.

Instead, the Singularity Principles highlight the ways in which deliberate, thoughtful, coordinated actions by humans can steer the outcome of the Singularity. We can be cautiously optimistic, but only if our optimism is grounded in an ongoing realistic assessment, careful anticipation of possible scenarios, vigilant proactive monitoring for unexpected developments, and agile engagement when needed.

Singularity hyping

A third uncertainty to highlight is the uncertainty regarding the AI methods that are most likely to result in Artificial Superintelligence.

That’s in contrast to another overconfidence displayed by some people in the Singularity Shadow – overconfidence regarding how ASI can actually be built.

This is the Singularity Shadow characteristic of singularity hyping. What gets hyped are particular solutions, methods, people, or companies.

Some of this hyping is financially motivated. People wish to increase sales for particular products, or to generate publicity for particular ideas or particular people, or to encourage investment in particular companies.

Some of this hyping arises from people’s personal insecurity. They seek some kind of recognition or acclaim. They jump on a bandwagon of support for some new technique, new company, or perceived new wunderkind. They derive some personal satisfaction from being a superfan. Perhaps they are also desperately trying to convince themselves that, in a world where many things seem bleak, things will turn out well in the end.

However, hyping can cause a great deal of damage. People may be persuaded to squander large sums of money in support of products or companies that can deliver no lasting value – money that would have been better invested in more deserving solutions.

Worse, solution hyping damages the credibility of the idea of the Singularity. When critics notice that Singularity enthusiasts are making unfounded claims, with insufficient evidence, it may increase their scepticism of the entire idea of Artificial Superintelligence.

Instead, the Singularity Principles urge a full respect for the best methods of science and rationality, keeping an open mind, and communicating with integrity. That’s the way to increase the chance of reaching a positive singularity. But if hype predominates, it is no wonder that observers will turn away from any serious attention to the Singularity.

Singularity risk complacency

Some people within the Singularity Shadow exhibit the characteristic of singularity risk complacency. That’s a willingness to overlook the risks of possible existential disaster from the Singularity, out of a fervent hope that the Singularity will arrive as quickly as possible – early enough to deliver, for example, cures for age-related diseases that would otherwise cause specific individuals to die.

These individuals know well that there are risks involved in the approach to ASI. But they prefer not to talk about these risks, or they offer glib reassurances about them, in order to move the conversation quickly back to positive outcomes.

These individuals don’t want to slow down any approach to the Singularity. They want humanity to reach the Singularity as quickly as possible. They see it as a matter of life and death.

These individuals want to minimise any discussions of what they call “doomsday scenarios” for the advent of ASI. If they cannot silence these discussions altogether, they want to make these scenarios appear ridiculous, so no-one will take them seriously.

What these individuals are worried about is political interference in projects to research, develop, and deploy improved AI. That interference is likely to introduce regulations and monitoring, complicating AI research, and, probably, slowing it down.

It’s true that politicians frequently make a mess of areas where they try to introduce regulations. However, that’s not a good reason for trying to shut down arguments about potential adverse consequences from AI with ever greater capabilities. Instead, it’s a reason to improve the calibre of the public discussion about the Singularity. It’s a reason to promote the full set of ideas within the Singularity Principles.

Downplaying the risks associated with the Singularity is like the proverbial ostrich putting its head in the sand. It’s a thoroughly bad idea. What’s needed is as many eyes as possible looking into the possibilities of the Singularity. It’s even better if the people looking are already familiar with key issues that have arisen in earlier research. Accordingly, the Singularity Principles urge that an honest conversation takes place, not one that attempts to hide or distort important information.

Singularity term overloading

One reason why the conversation about the Singularity has become distorted is because writers have used the term “Singularity” for several conflicting ideas. This area of the Singularity Shadow can be called singularity term overloading.

The core meaning of the word “singularity”, as it is used in mathematics and physics, is a point at which past trends break down: if those trends were to continue, the functions describing them would reach infinite values.
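For a simple illustration of this mathematical sense of the word (an illustrative example added here, not drawn from any particular cited source), consider a quantity that grows hyperbolically:

```latex
x(t) = \frac{C}{t_{s} - t}, \qquad C > 0 .
```

As t approaches the critical time t_s, the value of x(t) grows without limit; at t = t_s the formula breaks down altogether, and the earlier trend tells us nothing about what happens beyond that point.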

Associated with this is a breakdown of our ability to foresee what might happen next. That connection was made by mathematician and science fiction author Vernor Vinge in his 1986 Prometheus Award-winning novel Marooned in Realtime, which introduced the term “Technological Singularity”. Entities with significantly greater intelligence than us are likely to be focussed on different sorts of challenges and questions than the ones that motivate and animate us.

In an article he wrote for the January 1983 edition of Omni Magazine, Vinge had put it like this:

We are at the point of accelerating the evolution of intelligence itself… We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the centre of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science fiction writers. It makes realistic extrapolation to an interstellar future impossible.

However, alongside these two associated concepts, of the emergence of superintelligence and the unpredictability of what happens next, two other meanings have been attached to the term “singularity”. These other meanings detract from the core concepts.

The first of these additional meanings is that of exponential acceleration. The institute with the name “Singularity University” mainly runs courses on exponential growth and on how companies can disrupt competitors by switching to new technological platforms. As has been said, the Singularity University is neither an actual accredited university, nor is it concerned with the Singularity.

The other unhelpful usage of the term “singularity” is on the cover of the 2005 book by inventor and writer Ray Kurzweil, The Singularity is Near. A subtitle provides this clarification: “When humans transcend biology”. The book retells the history of the cosmos in six epochs: Physics and chemistry, Biology, Brains, Technology, The merger of technology with biology, and, sixth, The universe wakes up.

Whether a merger between humans and ASI is a sufficient response to the challenges of the singularity is the subject of its own page in the Singularity Principles. For now, it can simply be said that such a merger could produce very bad results, unless major progress takes place, beforehand, on questions of values and ethics. These questions arise from the existence of entities with much greater intelligence, and therefore much greater power, than present-day humans.

It’s the emergence of that much greater intelligence that deserves our priority attention.

Singularity anti-regulation fundamentalism

An important part of keeping an open mind about steering the development of advanced AI is keeping an open mind about the usage of legal regulations to constrain and incentivise that development.

That’s in contrast to a sixth aspect of the Singularity Shadow: singularity anti-regulation fundamentalism.

This fundamentalism insists that governments be prevented from passing regulations which might slow down progress toward the kinds of advanced AI that could lead to the singularity.

In this view, government regulation of fast-changing technology always lags far behind the actual needs of the people creating and using that technology. Regulations might be well-intentioned, but, the claim goes, they are almost always counterproductive.

However, that’s not what history shows. Legal regulations across many industries have played important roles in improving human wellbeing. For example, legal regulations boosted the safety of motor vehicles, cutting down the rate of deaths from accidents, and they prevented the use of leaded petrol, which had harmful effects on people’s health.

Legal regulations played an important role in steering the chemical industry away from the use of CFC chemicals in refrigeration and aerosol cans, thereby halting the growth of the hole in the ozone layer and the associated increase in skin cancer.

Another example is the regulation requiring the pasteurisation of milk, which cut back on the incidence of childhood diseases that often proved fatal. Consider also how regulation cuts down on advertising that makes inflated claims that would mislead consumers, and how another set of regulations, this time in the banking industry, helps to protect citizens from losing their savings due to reckless banking practices.

Of course, in each case, counter arguments were made at the time these regulations were introduced, and many counter arguments continue to be made to this day. Getting regulations right is a hard task, full of complexities.

But here’s what no-one would say. Regulating the nuclear power industry is hard. So let’s cancel all attempts to regulate that industry. Instead, each provider of nuclear power will from now on be free to make its own decisions about cutting corners, commencing power generation before safety checks have been completed, employing personnel without adequate safety training, and disposing of nuclear waste in whatever way it happens to choose.

No, we should expect to work as hard on the appropriate regulations as we do on the technology itself. With the right frameworks in place, we can expect “many hands to make light work”. But without these frameworks, worse and worse mishaps are likely to arise as a result of AI systems that malfunction or are badly configured. And if no such frameworks exist, we should not be surprised if politicians attempt to close down the entire ASI project.

Singularity preoccupation

The seventh and final aspect of the Singularity Shadow can be called singularity preoccupation. That’s the view that shorter-term issues deserve little attention, since they pale in significance compared to the magnitude of changes arising from the technological singularity.

The view can be illustrated by a couple of graphical images that spread around social media in 2020. The first of these images portrays an increasing cascade of threats to the wellbeing of so-called normal life. One wave is labelled “COVID-19” and is shown along with advice that seems intended to reassure people, “Be sure to wash your hands and all will be well”. Behind that first wave is a larger one labelled “Recession”, referring to economic trauma. And behind that is an even larger wave labelled “Climate Change”. This image gives a fair representation of what may be considered mainstream concern about the challenges faced by human civilization in the 2020s. The implication is that action more radical than mere hand-washing will be required.

The second graphical image was created by German foresight blogger Alexander Kruel, who posts on Twitter as XiXiDu. It aims to provide additional perspective, beyond the concerns of mainstream culture. Two even larger waves bear down upon humanity. Dwarfing the threats from the first three waves is a wave representing bioterrorism. But dwarfing even that is a wave representing artificial intelligence.

This second image is thought-provoking. But it’s important to resist one dangerous conclusion that is sometimes drawn from this line of thinking.

That dangerous conclusion is that any effort spent on addressing potential existential threats such as runaway climate change or the spread of WMDs – weapons of mass destruction – is a distraction from effort that should be spent instead on accelerating the advent of superintelligence. In that analysis, superintelligence will produce comprehensive solutions to any threats such as runaway climate change or the spread of WMDs.

There are four reasons that conclusion is dangerous.

First, these existential threats could cause the collapse of society before we reach the level of creating ASI. Each of these threats poses serious risks in its own right. It is in combination that the greater challenges arise.

Second, even if these existential threats don’t actually destroy human civilization ahead of the emergence of ASI, they could cause such a set of political struggles that the social and academic environment in which ASI is developed would be damaged and increasingly dysfunctional. That raises the chance that, when ASI emerges, it will be flawed, leading in turn to the collapse of human civilization.

Third, even if these existential threats don’t destroy human civilization, either directly (case one above) or indirectly (case two above), they could still kill or maim many millions or even billions of humans.

Accordingly, human society needs to take actions across two different timescales. In the short term, action is needed to lessen the chance of any of these existential threats detonating (literally or metaphorically). In the slightly longer term, action is also needed regarding particular risks and opportunities associated with the emergence of ASI.

If that sounds like double the amount of work, that’s not exactly true. The same set of methods, namely the Singularity Principles, addresses both these sets of issues in parallel.

Fourth, any apparent disregard by Singularity enthusiasts for such a set of humanitarian setbacks en route to the creation of ASI will, with good reason, provoke revulsion among many observers against the community of people who take the idea of the Singularity seriously.

In other words, as with all the other characteristic elements of the Singularity Shadow, the associated attitude will drive many people away from giving the Singularity the attention it requires.

Looking forward

The various components of the Singularity Shadow all can, and should, be dispelled. This can happen as understanding spreads about the content of the Singularity Principles.

The result will be that the core nature of the Singularity will be seen much more clearly:

  • It will become easier for everyone to perceive the risks and opportunities arising from the emergence of Artificial Superintelligence that are actually the most important.
  • It will also become easier for everyone to appreciate the actions that need to be taken to raise the probability of beneficial ASI as opposed to destructive ASI.

Note: The following video from the Vital Syllabus contains a visual illustration of the Singularity Shadow. (Some aspects of the description of this shadow have evolved since the video was originally recorded.)
