Denying the Singularity

Bad reasons to deny the Singularity

The Singularity Shadow provides part of the explanation of why more people don’t pay serious attention to the remarkable prospect of the emergence of Artificial Superintelligence (ASI), also known as the Technological Singularity.

People we can call Singularity critics – people who are uncomfortable with the idea that an ASI might arise and totally transform the human condition, in ways that could be profoundly positive but could also be deeply destructive – latch on to attitudes or statements from within the Singularity Shadow.

These statements or attitudes include: Singularity Timescale Determinism, Singularity Outcome Determinism, Singularity Hyping, Singularity Risk Complacency, Singularity Term Overloading, Singularity Anti-Regulation Fundamentalism, and Singularity Preoccupation.

The Singularity critics say, or think, perhaps subconsciously, that if Singularity enthusiasts make these kinds of mistakes, then the whole idea of the Singularity can be ignored, or deprioritised.

Now that’s a bad error of reasoning. The arguments for taking the Singularity seriously – the arguments in the Singularitarian Stance – hold up strongly, independently of the unfortunate and unhelpful confusion introduced by these shadow statements or attitudes.

But what motivates at least some of the Singularity critics is more than an error in reasoning. The additional motivation lies at the emotional, psychological, or ideological level. It is these other factors that motivate what can be called the Denial of the Singularity.

That motivation, in turn, predisposes the critics to jump to the wrong conclusions when they listen to arguments about the Singularity.

The denial of death

Part of the human condition is a deep-rooted fear of our own extinction – an overwhelming apprehension regarding the end of our existence.

As explored by many writers, including, famously, in Ernest Becker’s 1974 Pulitzer Prize-winning book The Denial of Death, we humans construct large edifices of culture, religion, philosophy, and art, at least in part to numb the existential dread of our own forthcoming personal annihilation.

As it is with our personal extinction, so also it is with the potential extinction of the entire human species – or with the potential diminution of the importance of humanity, as the emergence of an Artificial Superintelligence might displace us into a position of secondary or minor importance.

At least some of the negativity shown by Singularity critics towards the concept of the Singularity likely reflects that fundamental fear – especially if these critics cannot perceive credible routes whereby humanity is uplifted by superintelligence, rather than being subjugated by it.

Therefore, critics look for ways to deny that humanity might be superseded, rendered basically irrelevant, or annihilated. They want to cry out, “It’s all nonsense”.

That anxiety predisposes them to look favourably on any arguments that appear to show that

  • AI is systematically over-hyped: it’s not just individual products that are over-hyped; the entire field of AI is much less capable than enthusiasts like to claim
  • Or that AI is just like any other technology, which humanity has learned to adapt to and take in our stride, so it’s not really going to change our condition much
  • Or that there have already been several previous singularities – the invention of speech, of writing, of printing, and of the steam engine – and humanity has survived all of these, so we’ll survive AI too
  • Or that there will be easy ways to control an AI, such as simply removing the power cord – though good luck switching off the entire Internet!

These arguments are all flawed. It’s important to take the time to explore both the strengths and the weaknesses of such arguments. But it’s also important to recognise the psychological factors that are driving critics to clutch onto such arguments.

How special is the human mind?

What many critics probably want to hear, or to believe, is that the human mind is so special and unique that AI could never match its core attributes – that the human mind cannot be reduced to calculations or computations, to mechanisms, or indeed to raw physics.

These critics want to hear, or to believe, that in some sense the human mind can transcend the brain, and can survive the brain’s decay and destruction. They want to hear, or believe, that there’s a substantial soul as well as a transient mind.

That provides a kind of hope that their souls could, perhaps, survive the deaths of their physical bodies in some meaningful way.

This line of thinking is rarely spelt out in such a clear, explicit manner, but it seems to be present in the backs of some people’s minds. It could be part of a religious faith that is consciously chosen. Or it could be an indirect hangover from a religious faith that used to be held personally, or by others in the community, and which is no longer proclaimed or declared, but which continues to exert an influence.

That influence can make people hostile to the singularitarian suggestion that a future ASI will indeed outperform human minds, not just in narrow fields of intelligence, but in general cognition and awareness.

The denial of the Singularity, therefore, arises at least in part from a deep fear of the extinction or subordination of humanity, and from a deep-seated wish that the human mind be impossible to match by artificial mechanisms.

To overcome this resistance, it’s important to address these psychological characteristics.

A credible positive vision

Happily, the Singularity Principles highlight the beginning of a credible, engaging vision for how ASI can be steered to an outcome that will be profoundly positive for humanity.

That vision also highlights how the human mind, with the support of artificial intelligence and other technologies, can be lifted to higher levels of transcendence, vitality, and consciousness than ever before.

Of course, it’s often dangerous when arguments are won on account of emotional appeal, or because an attractive vision is presented.

The argument needs to stand up rationally and objectively. We need to be able to assess the pros and cons of such arguments without our reasoning being subverted by our fears and desires.

The Vital Syllabus educational project has a number of areas that can help here:

  • Learning how to learn, of which learning how to unlearn is a critical skill,
  • Collaboration, in which the mental shortcomings of each of us as individuals can be addressed through collective intelligence and the wise design of teams,
  • Augmentation, in which technologies help to free our minds from cognitive biases,
  • Emotional Health, in which we can learn to overcome emotional pressures which destabilise our thinking processes.

Note: The following video from the Vital Syllabus contains a visual illustration of the Denial of the Singularity. (Some aspects of the description have evolved since the video was originally recorded.)
