The denial of the Singularity
The Singularity Shadow provides part of the explanation of why more people don’t pay serious attention to the remarkable prospect of the emergence of Artificial General Intelligence (AGI) – an event also known as the Technological Singularity.
People we can call Singularity critics are uncomfortable with the idea that an AGI might arise and totally transform the human condition – in ways that could be profoundly positive but could also be deeply destructive. These Singularity critics latch on to attitudes or statements from within the Singularity Shadow.
These statements or attitudes include: Singularity Timescale Determinism, Singularity Outcome Determinism, Singularity Hyping, Singularity Risk Complacency, Singularity Term Overloading, Singularity Anti-Regulation Fundamentalism, and Singularity Preoccupation.
These Singularity critics say, or think, perhaps subconsciously, that if Singularity enthusiasts make these kinds of mistakes, then the whole idea of the Singularity can be ignored, or deprioritised.
Now that’s a bad error of reasoning. The arguments for taking the Singularity seriously – the arguments in the Singularitarian Stance – hold up strongly, independently of the unfortunate and unhelpful confusion introduced by these shadow statements or attitudes.
But what motivates at least some of the Singularity critics is more than an error in reasoning. The additional motivation lies at the emotional, psychological, or ideological level. It is these other factors that motivate what can be called the Denial of the Singularity.
That motivation, in turn, predisposes the critics to jump to the wrong conclusions when they listen to arguments about the Singularity.
The denial of death
Part of the human condition is a deep-rooted fear of our own extinction – an overwhelming apprehension regarding the end of our existence.
As explored by many writers, including, famously, in the 1974 Pulitzer Prize winning book by Ernest Becker, The Denial of Death, we humans construct large edifices of culture, religion, philosophy, and art, at least in part to numb the existential dread of our own forthcoming personal annihilation.
As it is with our personal extinction, so also it is with the potential extinction of the entire human species – or with the potential diminution of the importance of humanity, as we might be displaced by the emergence of an artificial superintelligence into a position of secondary or minor importance.
At least some of the negativity shown by Singularity critics toward the concept of the Singularity seems to reflect that fundamental fear – especially if these critics cannot perceive credible routes whereby humanity is uplifted by superintelligence, rather than being subjugated by it.
Therefore critics look for ways to deny that humanity might be superseded, or rendered basically irrelevant, or annihilated. They want to cry out, “It’s all nonsense”, and “We need to change the subject”.
That anxiety predisposes them to look favourably on any arguments that appear to show that
- AI is systematically over-hyped; it’s not just individual products that are over-hyped but the entire field of AI is much less capable than enthusiasts like to claim
- Or, that AI is just like any other technology, to which humanity has learned to adapt, taking it in our stride. It’s not really going to change our condition much
- Or, that there have already been several previous singularities, such as the invention of speech, the invention of writing, the invention of printing, and the invention of the steam engine, and humanity has survived all these, so we’ll survive AI too
- Or, that there will be easy ways to control an AI, such as simply removing the power cord – though good luck switching off the entire Internet!
These arguments are all flawed. It’s important to take the time to explore both the strengths and the weaknesses of such arguments. But it’s also important to recognise the psychological factors that are driving critics to clutch onto such arguments.
How special is the human mind?
What many critics probably want to hear, or to believe, is that the human mind is so special and unique that AI could never match its core attributes. The human mind, in this view, cannot be reduced to calculations or computations, or to mechanisms, or indeed to raw physics.
These critics want to hear, or to believe, that in some sense the human mind can transcend the brain, and can survive the decay and destruction of the brain. They want to hear, or believe, that there’s a substantial soul as well as a transient mind.
That provides a kind of hope that their souls could, perhaps, in some meaningful way, survive the deaths of their physical bodies.
This line of thinking is rarely spelt out in such an explicit manner, but it seems to be present in the backs of some people’s minds. It could be part of a religious faith that is consciously chosen. Or it could be an indirect hangover from a religious faith that used to be held personally, or by others in the community – a faith that is no longer proclaimed or declared, but which continues to exert an influence.
That influence can make people hostile to the singularitarian suggestion that a future AGI will indeed out-perform human minds, not just in narrow fields of intelligence, but in general cognition and awareness.
The denial of the singularity, therefore, arises at least in part from a deep fear of the extinction or subordination of humanity, and from a deep grasping for the human mind to be incapable of being matched by artificial mechanisms.
To overcome this resistance, it’s important to address these psychological factors. That requires more than careful rational arguments; it also requires a credible, engaging, positive vision.
A credible positive vision
Many people who grasp the potential significance of the rise of AGI are nevertheless deeply worried that there is no way to avoid a catastrophic outcome. Therefore they jump through mental hoops to justify putting the whole subject out of their mind. However, they can regain their courage and confidence by reviewing the likely impact of the Singularity Principles.
These principles provide the beginning of a credible, engaging vision for how fast-improving AI can be steered to an outcome that will be profoundly positive for humanity.
That vision also highlights how the human mind, with the support of artificial intelligence and other technologies, can be lifted to higher levels of transcendence, vitality, and consciousness than ever before.
Of course, it’s often dangerous when arguments are won on account of emotional appeal, or because an attractive vision is presented.
The argument needs to stand up rationally and objectively. We need to be able to assess the pros and cons of such arguments without our reasoning being subverted by fears and desires.
The Vital Syllabus educational project has a number of areas that can help here:
- Learning how to learn, of which learning how to unlearn is a critical skill,
- Collaboration, in which the mental shortcomings of each of us as individuals can be addressed through collective intelligence and the wise design of teams,
- Augmentation, in which technologies help to free our minds from cognitive biases,
- Emotional Health, in which we can learn to overcome emotional pressures which destabilise our thinking processes.
The Vital Syllabus project is collecting material that can assist students of all ages – so that all of us can think more clearly and more effectively.
One area of debate where rationality is under particular stress is that of timescales – namely, how urgent is the task of significantly improving the anticipation and management of radically disruptive technology? That question of urgency is addressed in the next chapter.
Note: The following video from the Vital Syllabus contains a visual illustration of the Denial of the Singularity. (Some aspects of the description have evolved since the video was originally recorded.)