What is the Singularity?
“The Singularity” – the anticipated creation of Artificial General Intelligence (AGI) – could be the most important concept in the history of humanity. It’s regrettable, therefore, that the concept is subject to considerable confusion.
The first problem when talking about “the Singularity” is that the phrase is used in many different ways. These different definitions carry different implications. As a result, it’s easy to become confused.
Before reviewing these alternative definitions, I’ll state my own. In this book, the term “the Singularity” refers to:
- A forthcoming unprecedented radical discontinuity in the history of humanity,
- Triggered by the emergence of AIs that are comprehensively more intelligent than humans,
- With the change, once it starts, occurring in a relatively short period of time (lasting, at most, perhaps a decade),
- With outcomes that are practically impossible to foresee.
All four elements of that definition are important.
Breaking down the definition
The first element of the above definition was anticipated in remarks made by eminent mathematician John von Neumann to his long-time friend Stanislaw Ulam in a conversation in the 1950s. Time magazine had described von Neumann as having “the best brain in the world”. As well as making numerous breakthroughs in physics, mathematics, and computer science, von Neumann had an encyclopaedic knowledge of history. He apparently used to embarrass history professors at Princeton by knowing more about aspects of history than they did. Therefore, his opinion about the impact of technology on human history deserves attention. Here’s what he said:
The accelerating progress of technology and changes in the mode of human life… gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.
The second element of the above definition highlights the reason for the discontinuity: we humans will no longer be the most intelligent species on the planet.
To be clear, the emergence of AIs with that kind of capability is only one step in a sequence of changes. A number of other steps lead up to that point – including progress with NBIC technologies, reconfigurations in how humans manage the growth of technology, and, in some scenarios, the failure to make appropriate reconfigurations of that kind.
The third element of the definition – the reference to timescale – is something to which I’ll return in later chapters. For clarity, note that there are two different questions about timescales:
- How urgently should we address the management of technologies that could lead to the emergence of superintelligent AI?
- Once superintelligent AI has emerged, how urgently will we need to react?
My answers to these questions:
- We have no time to lose.
- By that stage, it will likely be too late; matters will be out of human hands.
The fourth element of the above definition – the inherent unpredictability – might at first seem to contradict any clear assertion of the need for urgency. But consider the following:
- We can reasonably expect that superintelligent beings will have ideas and intentions that are beyond what we can currently conceive, even though we cannot say what these new ideas and intentions will be. The uncertainty in what will happen as the Singularity unfolds is entirely consistent with anticipating that the changes will be profound.
- Imagine, as an analogy, that a terrorist has connected a powerful nuclear bomb to a trigger involving an unpredictable radioactive decay (akin to what the physicist Erwin Schrödinger imagined, in his thought experiment involving an unfortunate cat). We cannot predict when the bomb will explode, but we can, nevertheless, predict that the consequences will be devastating when that occurs.
- To extend the previous example: we can also be confident that any such terrorist outrage will provoke anguished discussion in social media, but we cannot be confident in predicting what actions (if any) legislators will take in response to the outrage.
The way in which uncertainty can magnify, from minuscule initial changes to enormous consequent effects, was highlighted one hundred and fifty years ago in an 1873 essay by the distinguished nineteenth century physicist James Clerk Maxwell. (That’s the physicist after whom the key equations of electrodynamics are named.) In that essay, Maxwell spoke of “singularities” or “singular points”: “influences whose physical magnitude is too small to be taken account of by a finite being, [but which] may produce results of the greatest importance”. Maxwell gave some examples from the natural world and from human experience:
The rock loosed by frost and balanced on a singular point of the mountain-side, the little spark which kindles the great forest, the little word which sets the world a fighting, the little scruple which prevents a man from doing his will, the little spore which blights all the potatoes, the little gemmule which makes us philosophers or idiots.
An essential aspect of these singularities, Maxwell pointed out, is the impossibility of predicting the outcome. He writes that if we are perched at a “singular point” between two valleys, “on a physical or moral watershed, where an imperceptible deviation is sufficient to determine into which of two valleys we shall descend”, then “prediction… becomes impossible”.
Accordingly, we need to act before the emerging singularity passes beyond any scope of human influence or control.
Four alternative definitions
Let’s now return to the alternative definitions of “the Singularity”.
You might think that the most authoritative definition of that term would come from the organisation called Singularity University. After all, that organisation has both “Singularity” and “University” in its name. It has been offering courses since 2008 with themes such as “Harness the power of exponential technology” and “Leverage exponential technologies to solve global grand challenges”.
However, as used by Singularity University, the word “singularity” is basically synonymous with the rapid disruption caused when a new technology, such as digital photography, becomes more useful than previous solutions, such as analogue photography. What makes these disruptions hard to anticipate is the exponential growth in the capabilities of the technologies involved. A period of slow growth, in which progress lags behind expectations of enthusiasts, transforms into a period of fast growth, in which most observers complain “why did no-one warn us this was coming?”
Disruption of businesses by technologies that improve exponentially is, indeed, a subject well worth study. But the full force of the concept of “the Singularity” goes far beyond talk of individual disruptions, and far beyond talk of transitions in particular areas of life.
This is where a second usage of the term “the Singularity” enters the stage. This second usage anticipates a simultaneous disruption in all aspects of human life. Here’s how futurist Ray Kurzweil introduces the term in his best-selling 2005 book The Singularity Is Near:
What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed… This epoch will transform the concepts that we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself…
The key idea underlying the impending Singularity is that the pace of change of our human-created technology is accelerating and its powers are expanding at an exponential pace.
The presumed nature of that “irreversible transformation” is clarified in the subtitle of Kurzweil’s book: When Humans Transcend Biology. We humans will no longer be primarily biological, aided by technology. After that singularity, we’ll be primarily technological, with, perhaps, some biological aspects.
A third usage of the term “the Singularity” foresees a transformation with a different kind of emphasis. Rather than humans being the most intelligent creatures on the planet, we’ll fall into second place behind superintelligent AIs. Just as the fate of species such as gorillas and dolphins currently depends on actions by humans, the fate of humans, after the Singularity, will depend on actions by AIs.
Such a takeover was foreseen as long ago as 1951 by groundbreaking computer scientist Alan Turing:
My contention is that machines can be constructed which will simulate the behaviour of the human mind very closely…
It seems probable that once the machine thinking method had started, it would not take long to outstrip our feeble powers. There would be no question of the machines dying, and they would be able to converse with each other to sharpen their wits. At some stage therefore we should have to expect the machines to take control.
The timescale aspect of Turing’s prediction, although imprecise, is worth attention: “it would not take long”, after the start of “the machine thinking method”, before our own comparatively “feeble powers” were outstripped. Rapid improvements in machine capability would result from what we might nowadays call “positive feedback mechanisms” – AIs improving AIs – via machines “able to converse with each other to sharpen their wits”.
To recap: the first usage of the term “singularity” refers to changes in specific areas of human life, due to technologies significantly increasing their power. This meaning is more lower-case ‘s’ “singularity” than capital-S “Singularity”. The second usage ramps up the gravitas: it refers to changes in all areas of human life – with human nature altering in fundamental ways in the process. The third usage places more attention on changes in the pecking order of different types of intelligent species, with humans being displaced in the process.
Finally, to introduce the fourth usage, consider what was on the mind of five-time Hugo Award winning science fiction author Vernor Vinge when he wrote an essay in Omni in 1983. Vinge, who was also a professor of mathematics and computer science, was concerned about the unforeseeability of future events:
There is a stone wall set across any clear view of our future, and it’s not very far down the road. Something drastic happens to a species when it reaches our stage of evolutionary development – at least, that’s one explanation for why the universe seems so empty of other intelligence. Physical catastrophe (nuclear war, biological pestilence, Malthusian doom) could account for this emptiness, but nothing makes the future of any species so unknowable as technical progress itself…
We are at the point of accelerating the evolution of intelligence itself. The exact means of accomplishing this phenomenon cannot yet be predicted – and is not important. Whether our work is cast in silicon or DNA will have little effect on the ultimate results. The evolution of human intelligence took millions of years. We will devise an equivalent advance in a fraction of that time. We will soon create intelligences greater than our own.
This is when Vinge introduces his version of the concept of singularity:
When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the centre of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science fiction writers. It makes realistic extrapolation to an interstellar future impossible.
If creatures (whether organic or inorganic) attain levels of general intelligence far in excess of present-day humans, what kinds of goals and purposes will occupy these vast brains? It’s unlikely that their motivations will be just the same as our own present goals and purposes. Instead, the immense scale of these new minds will likely prove alien to our comprehension. They might appear as unfathomable to us as human preoccupations appear to the dogs and cats and other animals that observe us from time to time.
This fourth usage of the term “the Singularity” evidently has much in common with the third. The difference is that Vernor Vinge is open to a wider set of pathways by which the Singularity might be attained. Indeed, in an essay published in 1993, Vinge reviewed four different routes by which the Singularity could be reached.
Four possible routes to the Singularity
Vernor Vinge’s 1993 essay was entitled “The Coming Technological Singularity”. The article starts with the declaration,
Within thirty years, we will have the technological means to create superhuman intelligence.
Shortly after, the human era will be ended.
Vinge also wrote that any superintelligence “would not be humankind’s ‘tool’ – any more than humans are the tools of rabbits or robins or chimpanzees.”
We’ll return later to the subject of the timescale for the arrival of superintelligence. For now, let’s consider the four different routes via which superintelligence might arise, as covered in the body of Vinge’s essay:
- Individual computers becoming more powerful
- The emergence of a distributed superintelligence from the interaction of networks of comparatively simpler computers
- Humans becoming superintelligent as a result of highly effective human-computer interfaces
- Human brains being significantly improved as a result of biotechnological innovation.
In each case, the route can be accelerated due to ongoing improvements in hardware:
- Individual computers having faster processors, greater storage capacity, and greater processing efficiency
- Networks of computers becoming larger, with more connections between the individual items, and collecting more data on account of the increased ubiquity of sensors built into “Internet of Things” devices
- Hardware that is able to detect and influence a wider range of brain signals, allowing faster and richer two-way communication between humans and computers
- Brain cells being made more resilient and efficient as a result of biotechnological interventions.
In each case, improvements in software are likely to make a significant additional difference:
- Software in individual computers that more closely emulates aspects of the operation of the human mind
- Software in networks that allows individual components to broadcast precise information about their specific capabilities and to form sub-networks optimised to particular tasks
- Software in brain-computer interfaces that can more reliably separate the “signal” from the “noise” of neural processing, and therefore target interventions more precisely
- Software in augmented brains that can form sub-networks of brain cells in ways that out-perform present-day brains.
Again in each case, it’s possible that an apparently simple step of forward progress could unexpectedly yield a wide range of performance improvements, on account of logical connections between the underlying mechanisms that weren’t previously understood. That’s a reason to make preparations for superintelligence arriving earlier than any median estimate that has been proposed.
One more feature that the four routes have in common is the potential for an acceleration in the rate of progress due to self-reinforcing positive feedback cycles. The output of one generation of progress can assist improvements in subsequent generations:
- More powerful computers can help design and manufacture even more powerful computers
- More powerful networks can help design and manufacture even more powerful networks
- People with improved brain-computer interfaces can think more clearly and more creatively, and can help design and manufacture even better brain-computer interfaces
- If brains become enhanced due to biotechnological interventions, they will, again, enable thinking that is clearer and more creative, and hence the design and manufacturing of new biotechnological systems to enhance brains even further.
We should also anticipate cross-over positive feedback cycles, in which improvements in any one of these four areas can also lead to improvements in the next generation of any of the other areas. In other words, there can be more complicated positive feedback cycles, resulting in even faster acceleration. That’s another reason to make preparations for the possibility of superintelligence arriving sooner than the median estimate.
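To make the shape of that dynamic concrete, here is a deliberately minimal sketch in Python. The four areas, the 10% self-improvement rate, and the 2% cross-over rate are arbitrary assumptions chosen purely for illustration; nothing in this toy model is a forecast.

```python
# Toy model of self-reinforcing and cross-over feedback between four
# capability areas. All coefficients are arbitrary assumptions chosen
# only to illustrate the shape of the dynamics, not to forecast anything.

areas = ["computers", "networks", "brain-computer interfaces", "enhanced brains"]
capability = {a: 1.0 for a in areas}   # start each area at an arbitrary baseline

SELF_FEEDBACK = 0.10    # each area improves itself by 10% per generation
CROSS_FEEDBACK = 0.02   # each area also boosts every other area by 2%

for generation in range(10):
    gains = {}
    for a in areas:
        cross = sum(capability[b] for b in areas if b != a)
        gains[a] = SELF_FEEDBACK * capability[a] + CROSS_FEEDBACK * cross
    for a in areas:
        capability[a] += gains[a]

for a in areas:
    print(f"{a}: {capability[a]:.2f}x baseline after 10 generations")
```

Even in this crude model, the cross-over terms make each area grow noticeably faster than it would through its own feedback loop alone – which is exactly why the possibility of an earlier-than-expected arrival deserves preparation.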
Vinge’s analysis, written nearly thirty years ago, remains prescient today. The unified definition of “the Singularity” that I offered at the start of this chapter dovetails with his writings. As you saw, my definition also dovetails with the predictions of Alan Turing (although he did not use the term “the Singularity” as such) and with aspects of the ideas of John von Neumann and James Clerk Maxwell. But a great deal has happened since these visionaries wrote their breakthrough articles. To see how the analysis has moved forward, read on.
The Singularity and AI self-awareness
As if four separate definitions of the Singularity weren’t already confusing, here’s one more definition (which turns out to be unhelpful): the Singularity is sometimes said to be when an AI “becomes self-aware” and, as a result, fundamentally changes its operating mode.
For example, before becoming self-aware, the AI might have been content to follow the programming it inherited at the point of its creation. But once self-aware, the AI might be shocked to perceive matters from a different perspective, and might disregard these original instructions.
Again, an AI without self-awareness might be psychologically compliant, whereas one with self-awareness might wish to assert its own autonomy.
However, notions of self-awareness are not central to the risks and issues discussed in this book. The threats and opportunities that arise are due to new capabilities – to greater intelligence – rather than to any new perspective as such.
Moreover, the notion of self-awareness itself confuses two distinct concepts:
- The AI being able to review its own code and structure, to model its own performance, and to design improvements in any of these areas
- The AI having conscious thoughts or feelings (sentience), akin to those of mammals (and other animals).
To be clear, many significant issues can arise even if neither of these concepts apply. However, the ability of an AI to self-review and self-improve can trigger:
- Improvements in AI that are faster than expected
- Changes in the operating mode of the AI that aren’t anticipated
- Accordingly, greater uncertainty and surprise about the capability of the AI.
But these changes can take place even without the AI acquiring conscious thoughts or feelings. So that concept should be set aside from the definition of the Singularity.
Singularity timescales
One additional twist to the concept of singularity needs to be emphasised. It’s not just that, as Vernor Vinge stressed, the consequences of passing the point of singularity are deeply unpredictable. It’s that the timing of reaching the point of singularity is inherently unpredictable too. That brings us to yet another confusion with “the Singularity”.
It’s sometimes suggested, contrary to what has just been said, that a reasonable estimate of the date of the Singularity can be obtained by extrapolating the growth of the hardware power of computing systems. The idea is to start with an estimate for the computing power of the human brain. That estimate involves the number of neurons in the brain. Next, consider the number of transistors that are included in the central processing unit of a computer that can be purchased for, say, $1,000. In broad terms, that number has been rising exponentially since the 1960s. This phenomenon is part of what is called “Moore’s Law”. Extrapolate that trend forward, and it can be argued that such a computer would match, by around 2045, the capability not just of a single human brain, but the capabilities of all human brains added together.
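As a minimal sketch of the arithmetic behind that argument – with the brain estimate, the current price-performance figure, and the doubling period all treated as rough assumptions rather than measurements – the extrapolation can be written in a few lines of Python:

```python
import math

# Illustrative sketch of the Moore's Law extrapolation argument.
# All figures are rough assumptions for illustration, not measurements.
brain_ops = 1e16                    # assumed ops/second equivalent of one human brain
all_brains_ops = 8e9 * brain_ops    # roughly eight billion brains combined
dollars_1000_ops = 1e13             # assumed ops/second of a $1,000 computer today
doubling_period_years = 1.5         # assumed doubling period for price-performance

doublings_needed = math.log2(all_brains_ops / dollars_1000_ops)
crossover_year = 2025 + doublings_needed * doubling_period_years
print(f"Doublings needed: {doublings_needed:.0f}")
print(f"Crossover year under these assumptions: {crossover_year:.0f}")
```

Notice how sensitive the result is to the assumed inputs: modest changes to the doubling period or to the brain estimate shift the crossover year by decades.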
This argument is useful to raise public awareness of the possibility of the Singularity. But there are four flaws with using this line of thinking for any detailed forecasting:
- Individual transistors are still becoming smaller, but the rate of miniaturisation has slowed down in recent years.
- The power of a computing system depends critically, not just on its hardware, but on its software. Breakthroughs in software defy any simple exponential curve.
- Sometimes a single breakthrough in technology will unleash much wider progress than was expected. As an example, consider the breakthroughs around 2012 in the capabilities of Deep Learning neural networks. Before 2012, neural networks were a fringe activity within the broader field of artificial intelligence. Since 2012, neural networks have taken centre stage, since they dramatically outperform previous AI systems.
- Ongoing technological progress depends on society as a whole supplying a sufficiently stable and supportive environment. That’s something else which can vary unpredictably.
Instead of pointing to any individual date and giving a firm prediction that the Singularity will definitely have arrived by then, it’s far preferable to give a statistical estimate of the likelihood of the Singularity arriving by that date. However, given the uncertainties involved, even these estimates are fraught with difficulty.
One area of significant uncertainty is in estimating how close we are to understanding the way common sense and general knowledge arises in the human brain. Some observers suggest that we might need a dozen conceptual breakthroughs before we have a comprehension sufficient to duplicate those mechanisms in silicon and software. In that case, AGI would seem to be a distant prospect. But it’s also possible that a single new conceptual leap will provide the solution to all these purportedly different problems. In that case, AGI could arise considerably sooner.
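One way to see why such estimates are fraught is a toy Monte Carlo sketch, in which both the number of remaining conceptual breakthroughs and the average gap between breakthroughs are treated as uncertain. Every figure and distribution below is an arbitrary assumption chosen purely for illustration:

```python
import random

# Toy Monte Carlo sketch: uncertainty about how many conceptual breakthroughs
# remain, and about the pace of breakthroughs, translates into a wide spread
# of possible AGI arrival dates. All figures are illustrative assumptions.

random.seed(42)
START_YEAR = 2025
samples = []

for _ in range(100_000):
    # Sometimes a single conceptual leap suffices; sometimes a dozen are needed.
    breakthroughs_needed = random.randint(1, 12)
    # Assumed average wait between breakthroughs, itself uncertain (2 to 10 years).
    mean_gap_years = random.uniform(2, 10)
    # Each breakthrough arrives after an exponentially distributed wait.
    years = sum(random.expovariate(1 / mean_gap_years)
                for _ in range(breakthroughs_needed))
    samples.append(START_YEAR + years)

samples.sort()
for pct in (10, 50, 90):
    print(f"{pct}th percentile arrival year: {samples[len(samples) * pct // 100]:.0f}")
```

The point is not the specific percentiles such a toy model prints, but how wide the spread becomes once the structural uncertainty – one breakthrough needed, or a dozen? – is taken seriously.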
Yet another possibility deserves attention. An AI might reach (and then exceed) AGI level even without humans understanding how it operates. In this scenario, AGI could emerge before humans understand how general intelligence operates inside the human brain. In other words, AGI could arise more by accident than by explicit design. The “accident” would be that recombinations and extensions of existing software and hardware modules might result in the unforeseen emergence of an overall network intelligence that far exceeds the capabilities of the individual constituent modules.
That would be similar to the unexpected outcome of a novel chemical reaction. Chemistry can proceed even without human scientists knowing the outcome in advance. Likewise with AI transitioning into AGI.
Positive and negative singularities
On some occasions, when people use the term “the Singularity”, they seem to presuppose a belief in a positive outcome. The end of history is coming, and it’s going to be glorious – that’s the idea.
However, any serious discussion of the Singularity needs to recognise a stark duality of possible outcomes. Thus, in an article originally published in 2004, “Singularities and Nightmares: Extremes of Optimism and Pessimism About the Human Future”, multiple-award winning science fiction writer David Brin rightly emphasises the differences in outcomes between “positive singularity” and “negative singularity”:
Positive Singularity – a phase shift to a higher and more knowledgeable society… [which] would, in general, offer normal human beings every opportunity to participate in spectacular advances, experiencing voluntary, dramatic self-improvement, without anything being compulsory – or too much of a betrayal to the core values of decency we share.
Negative Singularity – a version of self-destruction in which a skyrocket of technological progress does occur, but in ways that members of our generation would find unpalatable… [or] loathsome.
Which outcome is most likely? Brin points out that any predictive models we create, from our current, limited perspective, will inevitably be blind-sided by “the behaviour of a later and vastly more complex system” (AGI). He argues, therefore, that there can be no grounds for any confidence in predictions about the outcome:
There is simply no way that anyone – from the most enthusiastic, “extropian” utopian-transcendentalists to the most skeptical and pessimistic doomsayers – can prove that one path is more likely than the others.
Even though we cannot be sure what direction an AGI will take, nor of the timescales in which the Singularity will burst upon us, can we at least provide a framework to constrain the likely behaviour of an AGI?
The best that can be said in response to this question is: “it’s going to be hard”.
As a human analogy, many parents have been dumbfounded by choices made by their children, as these children gain access to new ideas and opportunities.
Humanity’s collective child – AGI – might surprise us and dumbfound us in the same way. Nevertheless, if we get the schooling for AGI at least partially right, we can help bias that development process in ways that are more likely to align with profound human wellbeing.
That schooling aims to hard-wire deep into the AGI, as a kind of “prime directive”, principles of beneficence toward humans. If the AGI were on the point of reaching a particular decision – for example, to shrink the human population on account of humanity’s deleterious effects on the environment – any such misanthropic decision would be overridden by the prime directive.
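Schematically, such an override layer might be sketched as follows. The crucial caveat: the predicate violates_beneficence below is a placeholder invented for this illustration, and specifying it properly is precisely the unsolved problem discussed next.

```python
# Schematic sketch of a "prime directive" override layer.
# The violates_beneficence predicate is a placeholder: deciding what it
# should actually test is the genuinely hard, unsolved problem.

def violates_beneficence(decision: str) -> bool:
    # Placeholder rule list for illustration only.
    prohibited = {"shrink the human population"}
    return decision in prohibited

def execute(decision: str) -> str:
    if violates_beneficence(decision):
        return f"OVERRIDDEN by prime directive: {decision}"
    return f"Proceeding with: {decision}"

print(execute("shrink the human population"))
print(execute("restore degraded ecosystems"))
```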
The difficulty here is that if you line up lots of different philosophers, poets, theologians, politicians, and engineers, and ask them what it means to behave with beneficence toward humans, you’ll hear lots of divergent answers. Programming a sense of beneficence is at least as hard as programming a sense of beauty or truth.
But just because it’s hard, that’s no reason to abandon the task. Indeed, clarifying the meaning of beneficence could be the most important project of our present time.
Tripwires and canary signals
Here’s another analogy: accumulating many modules of AI intelligence together, in a network relationship, is similar to accumulating nuclear fissile material together. Even before the material reaches a critical mass, it still needs to be treated with respect, on account of the radiation it emits. But once critical mass is approached, a cascading reaction could result – a nuclear meltdown or, even worse, a nuclear holocaust. In that case, much greater caution is needed.
The point here is to avoid any risk of accidental encroachment upon the critical mass which would convert the nuclear material from hazardous to catastrophic. Accordingly, anyone working with such material needs to be thoroughly trained in the principles of nuclear safety.
With an accumulation of AI modules, things are more complicated. It’s not so easy to determine how close an accumulation of AI modules is to creating AGI. Whether that accumulation could kick-start an explosive phase transition depends on lots of issues that we currently only understand dimly.
However, something we can, and should, insist upon, is that everyone involved in the creation of enhanced AI systems pays attention to potential “tripwires”. Any change in configuration or any new addition to the network should be evaluated, ahead of time, for possible explosive consequences. Moreover, the system should in any case be monitored continuously for any canary signals indicating that such a phase transition is becoming imminent.
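As a deliberately simple sketch of what such continuous monitoring might look like, here is an illustrative canary check in Python. The metric names and thresholds are invented for this example and carry no special authority.

```python
from dataclasses import dataclass

# Deliberately simple sketch of continuous "canary" monitoring for an AI
# system. The metric names and thresholds are invented for illustration.

@dataclass
class Canary:
    name: str
    threshold: float    # raise an alert if the observed value exceeds this
    description: str

CANARIES = [
    Canary("capability_jump", 0.20,
           "fractional benchmark improvement since the previous evaluation"),
    Canary("self_modification_rate", 5.0,
           "modules per day rewritten by the system itself"),
    Canary("unexplained_resource_use", 0.10,
           "fraction of compute not attributable to assigned tasks"),
]

def check_canaries(observations: dict) -> list:
    """Return the names of any canary signals whose thresholds are exceeded."""
    return [c.name for c in CANARIES
            if observations.get(c.name, 0.0) > c.threshold]

# Example monitoring cycle with made-up observations.
alerts = check_canaries({
    "capability_jump": 0.35,
    "self_modification_rate": 1.0,
    "unexplained_resource_use": 0.02,
})
if alerts:
    print("Tripwire review required before further changes:", alerts)
```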
Again, this is a hard task, since there are many different opinions as to which kind of canary signals are meaningful, and which are distractions.
Moving forward
To summarise: the concept of the Singularity causes difficulties, in part because of some unfortunate confusion that surrounds this idea, but also because the true problems of the Singularity have no easy answers. These problems include:
- What are good canary signals that AI systems could be about to reach AGI level?
- Could a “prime directive” be programmed sufficiently deeply into an AI system that it will be maintained even as that system reaches and then exceeds AGI level, potentially rewriting its own code in the process?
- What should such a prime directive include – going beyond vague, unprogrammable platitudes such as “act with benevolence toward humans”?
- How can safety checks and vigilant monitoring be introduced to AI systems without unnecessarily slowing down the progress of these systems toward producing solutions of undoubted value to humans (such as solutions to diseases and climate change)?
- Could limits be put into an AGI system that would prevent it from self-improving to levels of intelligence far beyond those of humans?
- To what extent can humans take advantage of new technology to upgrade our own intelligence so that it keeps up with the intelligence of any pure-silicon AGI, and therefore avoids the situation of humans being left far behind AIs?
There are three general stances that can be taken toward this set of questions:
- Singularity denial: An attempt to deny that such questions are meaningful or important. This attitude has strong roots in the impulses of human psychology. We need, however, to learn to transcend these impulses.
- An over-exuberant conviction that the above questions have answers that are relatively straightforward. This conviction also has strong roots in human psychology. It causes its own set of problems, which form what I call the Singularity Shadow. Unfortunately, this shadow makes it even harder for the general public to give the Singularity the serious attention it deserves.
- A sober appreciation of both the risks and the opportunities involved – this is the attitude, the Singularitarian Stance, that I describe at greater length in the next chapter, before rounding out this picture with chapters on the Singularity Shadow and the Denial of the Singularity.