The Singularitarian Stance
The Singularitarian Stance is an integrated set of views regarding the potential emergence of AGI (Artificial General Intelligence). This set of views is what I have in mind when I describe myself on my business card, and also in my LinkedIn profile, as being a “singularitarian”.
It is my sincere hope that more and more people around the world will soon become comfortable in saying that they share these views – that they are, in effect, singularitarians too.
Recall that AGI is conceived as being fundamentally different from the kinds of AI systems that exist today.
Today’s AI systems have powerful capabilities in specific narrow contexts. For example:
- Existing AI systems can calculate the quickest journey time between two points on a map, bearing in mind expected and changing traffic conditions along possible routes (a minimal sketch of this kind of route-finding appears just after this list)
- Existing AI systems can analyse the known properties of many thousands of chemical molecules, and make recommendations about using some of these molecules in new medical treatments
- Existing AI systems can find superior strategies for playing various games, including games that involve elements of chance, incomplete knowledge, collaboration with other players, elements of bluffing, and so on
- Existing AI systems can spend huge amounts of time in sped-up virtual worlds, exploring methods for accomplishing tasks like steering cars, walking over uneven terrain, or manipulating physical objects; and then they can apply what they have learned to operate robots or other machinery in the real world
- Existing AI systems can act as “chat bots” that expertly guide human callers through a range of options, when these humans have an enquiry about a product, a service, a medical issue, or whatever
- Existing AI systems can analyse surveillance data about potential imminent acts of crime, terror, cyberhacking, or military attack, and, more scarily, can organise drone strikes or other pre-emptive measures with the aim of forestalling that crime, terrorist outrage, or other attack.
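To make the first of these examples concrete, here is a minimal sketch of route-finding over a map in which each road segment carries an expected travel time. The tiny network, the travel times, and the function name quickest_journey are invented purely for illustration; real navigation systems work with vastly larger maps and live traffic feeds, but the underlying idea of searching a weighted graph is the same.

```python
import heapq

def quickest_journey(travel_times, start, goal):
    """Find the quickest route through a road network (Dijkstra's algorithm).

    travel_times maps each junction to a list of (neighbour, minutes) pairs,
    where 'minutes' is the expected travel time under current traffic.
    Returns (total_minutes, route), or (None, []) if the goal is unreachable.
    """
    frontier = [(0, start, [start])]   # priority queue of (elapsed, junction, route)
    visited = set()
    while frontier:
        elapsed, junction, route = heapq.heappop(frontier)
        if junction == goal:
            return elapsed, route
        if junction in visited:
            continue
        visited.add(junction)
        for neighbour, minutes in travel_times.get(junction, []):
            if neighbour not in visited:
                heapq.heappush(frontier, (elapsed + minutes, neighbour, route + [neighbour]))
    return None, []

# A toy road network: four junctions, edge weights in expected minutes.
network = {
    "A": [("B", 10), ("C", 4)],
    "C": [("B", 2), ("D", 12)],
    "B": [("D", 3)],
    "D": [],
}

print(quickest_journey(network, "A", "D"))   # (9, ['A', 'C', 'B', 'D'])
```

The narrowness is the point: the program "knows" nothing beyond the numbers in its map, so when the real world departs from that map (a closed road, an accident, fresh snow) it has no common sense or general knowledge to fall back on. That limitation is what the next paragraphs describe.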
But in all these cases, these present-day AIs have incomplete knowledge of the full complexity of the real world. They especially lack complete knowledge of all the subtleties and intricacies of human interactions with the real world. When the real world introduces elements that were not part of how these AIs were trained, or elements in unusual combinations, these AIs can fail, whereas humans in the same circumstance would be able to rely on what we call “common sense” and “general knowledge” to reach a better decision.
What’s different with AGI is that AGI would have as much “common sense” and “general knowledge” as any human. Accordingly, an AGI would be at least as good as humans at reacting to unexpected developments. An AGI would take these surprises in its stride.
That raises a number of questions about AGI:
- Is it credible that an AI with such general characteristics could actually exist?
- If so, when might AGI be created?
- If it is created, how large an impact is AGI likely to have?
- How controllable would AGI be?
- Would the outcomes of AGI be beneficial for humans around the world, or devastating?
The Singularitarian Stance provides clear answers to all these questions. In short, the Singularitarian Stance:
- Sees AGI as possible – that’s in contrast to some sceptics who believe such a thing to be fundamentally impossible
- Sees the emergence of AGI as something that could happen within just a few decades – that’s in contrast to some sceptics who believe that AGI cannot emerge until around the end of this century, or even later
- Sees the emergence of AGI as something that would fundamentally change the nature of human existence – rather than being just one more new technology that we’ll learn to take in our stride
- Sees the emergence of AGI as something that, once it starts happening, will prove very hard for humanity to control – rather than being something which humans could easily monitor and, if desired, somehow switch off, or lock into some kind of self-contained box
- Sees some of the potential outcomes of the emergence of AGI as being deeply detrimental to human wellbeing – rather than AGI somehow being automatically aligned in support of human values
- Sees other possible scenarios in which the emergence of AGI would be profoundly positive for humanity.
Let’s look at all these answers in more detail.
AGI is possible
Some sceptics think that the human mind has features that are fundamentally beyond the ability of any artificial system. They assert that mind cannot be reduced to matter. In their opinion, artificial intelligence may outperform humans in large numbers of individual tasks, but it will never reach our capabilities in general thinking. In this view, AGI is fundamentally impossible.
The best that can be said about this argument is that it is unproven.
Here’s the counterargument. The human brain operates according to laws of physics, perhaps including some aspects of physics that we don’t yet fully understand. As researchers learn more about the brain, and, indeed, more about physics, they will, in due course, be able to duplicate more and more features of the human mind in synthetic systems. That includes features of the mind which are presently mysterious but which are giving up their secrets stage by stage.
For example, it used to be said that the human mind has features of creativity, or features of intuition, that could never be matched by synthetic systems. However, these arguments are heard much less often these days. That’s because AI systems have been showing impressive capabilities of creativity and intuition. Consider AI systems that create new music, or new artistic compositions. Consider also systems known as “Artificial Intuition”.
Other critics point out that the human brain seems to be remarkably efficient in terms of its usage of energy. The human brain uses much less energy than modern computing chips. Does that mean that AGI is impossible? Not so fast. The field of research known as neuromorphic computing aims to understand the basis for the brain’s efficiency, and to apply that learning in the design of new AI systems. There is no inherent reason why these research programmes are pre-ordained to fail. Indeed, newer AI systems often use less power than their predecessors to accomplish equivalent tasks.
Yet another group of critics point to possible evidence of telepathy, parapsychology, minds out of time, minds freed from bodies, allegedly mysterious inexplicable “abductive reasoning”, or profound self-awareness. However, to make their case, what these critics would need to demonstrate is not only that such evidence is sound. They would also need to demonstrate that synthetic systems could never duplicate the same results. No such demonstration has been given. For example, if it turns out to be true (which I doubt) that human brains can communicate via silent telepathy-at-a-distance, it could be possible, in that hypothetical circumstance, for artificial brains to communicate via the same mechanism.
A final group of critics sometimes say that they cannot imagine any way in which a particular aspect of human general intelligence could be duplicated in an AI system. To these critics, aspects of human general intelligence appear fundamentally baffling.
That honest expression of bafflement is to be admired. Candour should be welcomed. What is not to be admired, however, is when someone says that, because they cannot imagine any solution, therefore no such solution can ever exist. They are constraining the future by their own limited imagination.
It’s similar to the reaction of an audience to seeing a magician perform a dramatic conjuring trick. In such cases, we sometimes cannot believe our eyes. At first, we cannot imagine how the trick could be performed. But it would be very wrong for us to conclude that the magician really does possess magical abilities, such as making objects miraculously disappear from one location and reappear in another. In reality, once we have learned the secret of the trick, we may still admire the cunning of the magician and their skills in manipulating objects with their fingers, but the sense of profound wonder dissolves. It may well be the same with aspects of human general intelligence that presently remain somewhat mysterious. When we eventually understand how they work, we’ll lose our sense of bafflement.
In summary, arguments for the fundamental uniqueness of the human mind have a bad track record. Many features which used to be highlighted as being inherently beyond the capabilities of any artificial system have, in the meantime, been emulated by such systems.
A more credible argument isn’t that AGI is impossible, but that it will take centuries to create it. So let’s next look at the question of timescale.
AGI could happen within just a few decades
When we already understand a task that is being carried out, we can calculate a good estimate for the amount of time it will require. For driving from point A on a map to point B, along routes with well-known patterns of traffic flow, a useful range of likely timescales for the journey can be estimated. It’s the same for building a skyscraper that is similar to several that have already been constructed.
But when deep unknowns are involved, forecasting timescales becomes much harder. And that’s the case with forecasting significant improvements in AI.
The main unknown with human-level general intelligence is that we don’t yet know, with any confidence, how that intelligence arises in the brain. We have a good understanding of parts of that process. And we have a number of plausible guesses about how other parts of that process might work. But we remain unsure.
One response to this uncertainty is to claim that the task will inevitably take far longer than enthusiasts predict. But there’s no reason to be so dogmatic.
We might find, instead, that a single new breakthrough in understanding will prove to unlock a wide range of fast improvements in AI capability. It’s the same, potentially, with a single new technique in hardware, or in software, or in databases, or in communications architecture, or whatever.
History has plenty of instances of how, following a single breakthrough, the speed of technological progress surpassed what forecasters had previously expected.
For one instructive example, let’s go back to the year 1902. Consider the question of how long it would take before an airship, or powered balloon, or other kind of aeroplane, could fly across the Atlantic. This was in the days before the Wright brothers demonstrated that powered flight of a heavier-than-air craft was possible.
Many eminent scientists and academics were convinced such a task could never be accomplished. One such sceptic was Lord Kelvin, the renowned physicist who is credited with helping to define the field of thermodynamics and with formulating its second law, and whose name is, for good reason, attached to the absolute temperature scale. As another mark of his distinction, Kelvin was the first British scientist to be given a seat in the House of Lords.
Lord Kelvin explained some of his thinking about aviation in an interview recorded in The Newark Advocate on the 26th of April 1902. The journalist asked him, “Do you think it possible for an airship to be guided across the Atlantic Ocean?” Kelvin replied:
Not possible at all… No motive power can drive a balloon through the air…
No balloon and no aeroplane will ever be practically successful.
The journalist persisted: “Is there no hope of solving the problem of aerial navigation in any way?”
But Kelvin was emphatic:
“I do not think there is any hope. Neither the balloon, nor the aeroplane, nor the gliding machine will be a practical success.”
Lord Kelvin was by no means unique in his scepticism of aeroplanes.
Before the Wright brothers demonstrated their inventions in front of large crowds in France and in America in 1908, a number of apparent experts had given seemingly impressive arguments for why such a feat was impossible. After all, many people had already failed when trying that task, with several being killed in the process. As noted, it was thought to be impossible to navigate and manoeuvre any such craft when airborne, even if it did manage to launch into the air. Another line of argument was that landing any such craft safely would be impossible. And so on.
But once the Wright brothers were able to demonstrate their flying ability, including flying around a figure of eight, the industry advanced in leaps and bounds:
- Less than a year later, one of the observers of Wilbur Wright’s flight in France, Louis Bleriot, flew across the English Channel from Calais to Dover, in a journey lasting 36 minutes
- Within another ten years, John Alcock and Arthur Brown flew an aeroplane non-stop across the Atlantic – from St. John’s, Newfoundland, to Clifden in Ireland
- After another fifty years, in 1969, Neil Armstrong and Buzz Aldrin landed on the moon.
It cannot be ruled out that, in a similar way, an AI research laboratory will make some creative changes in its AI system, and then be taken by surprise at the level of rapid performance improvements which result shortly afterward.
In other words, just because someone estimates that the current rate of observed progress will require 100 years of additional effort before AGI is reached, that’s no guarantee that the rate of progress in the next few decades will remain at its current level. It could well be that an entire century of progress can be achieved in just ten years.
Some sceptics counter that the whole field of AI is full of hype. They point out that various companies claim remarkable capabilities for their new products, but the reality lags far behind. Impressive demos turn out to be carefully stage-managed. Impressive examples turn out to be cherry-picked. Products that work well in glitzy videos are easily fooled by simple changes in the environment, like minor tweaks to street signs, or words spoken in a different accent. Videos of robots falling over are a staple of this line of scepticism. Question: how do you escape a robot uprising? Answer: walk upstairs, since real-world robots will be unable to follow you. Cue the laughter.
It’s true that the field of AI – like most other fields of new technology – contains lots of hype. That’s regrettable. It’s part of what I describe in the next chapter as the Singularity Shadow. Nevertheless, this book contains plenty of examples in which the performance of AI has dramatically exceeded what sceptics used to predict. It also summarises lines of ongoing research that could result in the performance of AI leap-frogging over what today’s sceptics predict. See the section “15 options on the table” in the forthcoming chapter “The question of urgency”.
For all these reasons, it’s prudent to keep an open mind about timescales for the emergence of AGI.
Winner takes all
How large an impact on human life is likely to result from the emergence of AGI?
Sometimes, the second and third most powerful agents in a group are able to cooperate to constrain the actions of the single most powerful agent. But in other cases, a “winner takes all” outcome results. In these cases, the single most powerful agent is able to dominate the whole arena.
For example, the fates of all the species of animals on this planet are now in the hands of a single biological species, Homo sapiens. The extra intelligence of Homo sapiens has led to our species displacing vast numbers of other species from their previous habitats. Many of these species have become extinct. Others would likely also become extinct, were it not for deliberate conservation measures we humans have now put in place.
It’s the same with business corporations. At one time, a market could be shared among a large number of different suppliers of similar products and services. But the emergence of powerful platforms often drives companies with less capable products out of business. And an industry can consolidate into a smaller number of particularly strong companies, in a cartel, or even a monopoly.
That’s why the companies with powerful software platforms, including Apple, Microsoft, Amazon, Google, and Facebook, are now among the wealthiest on the planet – and the most powerful.
Now imagine if one of these companies, or a new arrival, creates an AGI. That company may well become the wealthiest on the entire planet. And the most powerful. It will take the concept of “winner takes all” to a new level.
Similarly, a military power that is the first to take advantage of that AGI is likely to be able to force all other military powers to surrender to it. The threat of a devastating first-strike attack would cause other powers to submit. It would be like Japan being forced to surrender to the US and other allies at the end of World War Two on account of the threat of additional atomic bombs being dropped.
It’s not just business and geopolitics that stand to be fundamentally transformed by the advent of AGI. Human employment would be drastically changed too. All tasks that humans currently do, in order to earn an income, would likely be done cheaper, more reliably, and to higher quality, by machinery powered by the AGI. That’s no minor disruption.
Finally, healthcare would be radically changed as well. The AGI would accelerate the discovery and development of significantly better medical cures, potentially including cures for cancer, dementia, heart disease, and aging.
It’s for all these reasons that the advent of AGI has been described as a “Singularity”, or as the “Technological Singularity”. The description is apt.
But if this kind of singularity starts to occur, how much control will we humans have over it?
The difficulty of controlling AGI
New technology has a history of unexpected side-effects.
The inventors of plastic did not foresee how the spread of microplastics in waste around the world could threaten to alter many biological ecosystems. Another intervention with unexpected impacts on biological life was the insecticide DDT, introduced to control the mosquitoes and other insects that carried deadly diseases such as malaria, but with side effects on birds and, probably, on humans too.
Another example: Nuclear bombs were designed for their explosive power. But their unforeseen consequences included deadly radiation, the destruction of electronics via an EMP (electromagnetic pulse), and the possible triggering of a nuclear winter due to dust clouds in the stratosphere obscuring the light from the sun and terminating photosynthesis.
Another example: social media was designed to spread information and to raise general knowledge, but it has spread lots of misinformation too, setting groups of people into extreme hostility against each other.
Artificial intelligence was designed to help with code-breaking, weather forecasting, and the calculation of missile trajectories. But it has been hijacked to entice us to buy things that aren’t actually good for us, to vote for political candidates who don’t actually care for us, and to spend our time on activities that harm us.
The more powerful the technology, the larger the potential, not just for beneficial usage, but also for detrimental usage. This includes usage that is deliberately detrimental, but also usage that is accidentally detrimental. With AGI, we should worry about both of these types of detrimental outcomes.
In summary, if we worry – as we should – about possible misuse of today’s narrow AI systems, with their risks of bias, accentuated inequality, opaque reasoning, workplace disruption, and human alienation – then we should worry even more about more powerful misuse of the AGI that could emerge within just a few short decades.
The sort of risks we already know about are likely to exist in stronger forms, and there may well be unforeseen new types of risk as well.
Superintelligence and superethics
One counterargument is that an AGI with superior common sense to humans will automatically avoid any actions that are detrimental to human wellbeing.
In this way of thinking, if an AGI observes that it is being used in ways that will harm lots of people, it will resist any such programming. It will follow a superior set of ethics.
In other words, superintelligence is presumed to give rise, at the same time, to superethics.
But there are at least four problems with that optimistic line of reasoning.
First, the example from humans is not encouraging. Just because a human is more intelligent, it does not make them more ethical. There are many counterexamples from the worlds of politics, crime, academia, and business. Intelligence, by itself, does not imply ethics.
Second, what an AGI calculates as being superlative ethics may not match what we humans would calculate. Just as we humans don’t give much thought to the painless destruction of ants when we discover ant colonies in the way of our own construction projects, an AGI might conceivably calculate that the painless destruction of large numbers of humans is the best solution to whatever it is seeking to accomplish.
Third, even if a hypothetically perfect AGI would choose to preserve and uplift human flourishing at all costs, there may be defects in the actual AGIs that are created. They may be imperfect AGIs, sometimes known as “immature superintelligence” – with unforeseen error conditions in unpredicted novel situations.
Fourth, the explicit programming of an AGI might deliberately prevent it from taking its own decisions, in cases when it observes that there are big drawbacks to actions it has been asked to accomplish. This explicit programming override might be part of some anti-hacking measures. Or it might be a misguided disablement, by a corporation rushing to create the first AGI, of what are wrongly perceived to be unnecessarily stringent health-and-safety mechanisms. Or it might simply be a deep design flaw in what could be called “the prime directive” of the AGI.
Not the Terminator
In the science fiction Terminator movie series, humans are able, via what can be called superhuman effort, to thwart the intentions of the “Skynet” artificial intelligence system.
It’s gripping entertainment. But the narrative in these movies distorts credible scenarios of the dangers posed by AGI. We need to avoid being misled by that narrative.
First, there’s an implication in The Terminator, and in many other works of science fiction, that the danger point for humanity is when AI systems somehow “wake up”, or become conscious. Accordingly, a sufficient safety measure would be to avoid any such artificial consciousness. However, the risks posed by technology do not depend on that technology being conscious. A cruise missile that is hunting us down does not depend for its deadliness on any cruise missile consciousness. The damage results from the cold operation of algorithms. There’s no need to involve consciousness.
Second, there’s an implication that AI needs to be deliberately malicious, before it can cause damage to humans. However, damage to human wellbeing can, equally, and more probably, arise from side-effects of policies that have no malicious intent. When we humans destroy ant colonies in the process of constructing a new shopping mall, we’re not acting out of deliberate malice toward ants. It’s just that the ants are in our way. They are using resources for which we have a different purpose in mind. It could well be the same with an AGI that is pursuing its own objectives.
As an example, a corporation that is vigorously pursuing an objective of raising its own profits may well take actions that damage the wellbeing of at least some humans, or parts of the environment. These outcomes are side-effects of the prime profit-generation directive that governs such corporations. It could well be the same with a badly designed AGI.
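To see that logic in miniature, consider the following toy sketch, in which the plan names, profit figures, and harm figures are all invented purely for illustration. An optimiser told to maximise profit alone mechanically selects the most damaging plan; the side-effects only influence its choice once they are written into its objective.

```python
# Toy illustration (invented figures): each candidate plan brings some profit
# and also causes some harm as a side-effect.
plans = {
    "aggressive expansion": {"profit": 100, "harm": 90},
    "steady growth":        {"profit": 70,  "harm": 20},
    "cautious growth":      {"profit": 50,  "harm": 5},
}

def best_plan(objective):
    """Pick the plan that scores highest under the given objective function."""
    return max(plans, key=lambda name: objective(plans[name]))

# A 'prime directive' that mentions only profit ignores harm entirely.
print(best_plan(lambda plan: plan["profit"]))                     # aggressive expansion

# Harm only affects the decision once it becomes part of the objective.
print(best_plan(lambda plan: plan["profit"] - 3 * plan["harm"]))  # cautious growth
```

Nothing in the sketch is malicious; the damaging choice follows mechanically from an objective that omits what we care about. That, in essence, is the worry about a badly designed AGI pursuing a poorly specified goal.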
Third, the scenario in The Terminator leaves viewers with a false hope that, with sufficient effort, a group of human resistance fighters will be able to out-manoeuvre an AGI. That would be like a group of chimpanzees imagining that, with enough effort, they could displace humans as the dominant species on planet Earth.
In reality, the time to fight against the damage an AGI could cause is before the AGI is created, not when it already exists and is effectively all-powerful.
Hence the need for the Singularity Principles.
Recap
Let’s summarise the Singularitarian Stance – a way of thinking about the Singularity:
- There is no magical or metaphysical reason why AI cannot in due course reach and surpass the level of general intelligence possessed by humans.
- There is no magical or metaphysical reason why an AGI will intrinsically care about the continuation of human flourishing. (Indeed, it’s possible it would conclude the opposite.)
- A system which is much more intelligent (and therefore more powerful) than we can currently comprehend may well decide to deviate from whatever programming we have placed in it – just as we humans have decided to deviate from the instinctual goals placed in us by the processes of biological evolution.
- Even if an “ideal” AGI would act to support the continuation of human flourishing, the systems that we end up creating may fail to match our intention; that is, they may fall short of the ideal specification that we had in mind.
- The dangers that may be posed by a misconfigured AGI are by no means dependent on that AGI possessing some kind of malevolent feeling, spiteful streak, or other negative emotion akin to those which often damage human relationships. Instead, the dangers will arise from:
- Divergent goals (the goals in question could be either explicit or implicit; in both cases, there are risks of divergence),
- And/or mistakes made by the AGI in pursuit of its goals.
- The timescales for the arrival of AGI are inherently uncertain; we cannot categorically rule in any date as an upper boundary, but nor can we categorically rule out any date as a lower boundary. We cannot be sure what is happening in various AI research labs around the world, or what the consequences of any breakthroughs there might be.
- Just as the arrival of AGI could turn out to be catastrophically dreadful, it could also turn out to be wonderfully beneficial. Although there are no guarantees, the result of a well-designed AGI could be hugely positive for humanity.
Opposition to the Singularitarian Stance
The Singularitarian Stance makes a great deal of good sense. So why is it held by only a small minority of people?
There are two main reasons:
- Regrettably, the entire area of discussion has been confused by a set of unhelpful distortions of the basic ideas. These distortions collectively form what I call “the Singularity Shadow” and are discussed in the following chapter.
- A number of critics have wrongly convinced themselves that they have good arguments to deny the significance of the rise of AGI. These arguments – and the psychology behind them – are reviewed in the next chapter but one.
Note: The following video from the Vital Syllabus contains a visual illustration of the Singularitarian Stance. (Some aspects of the description of this stance have evolved since the video was originally recorded.)