The Singularity Principles: Preface
This book is dedicated to what may be the most important concept in human history, namely, the Singularity – what it is, what it is not, the steps by which we may reach it, and, crucially, how to make it more likely that we’ll experience a positive singularity rather than a negative singularity.
For now, here’s a simple definition. The Singularity is the emergence of Artificial General Intelligence (AGI), and the associated transformation of the human condition. Spoiler alert: that transformation will be profound. But if we’re not paying attention, it’s likely to be profoundly bad.
Despite the importance of the concept of the Singularity, the subject receives nothing like the attention it deserves. When it is discussed, it often receives scorn or ridicule. Alas, you’ll hear sniggers and see eyes rolling.
That’s because, as I’ll explain, there’s a kind of shadow around the concept – an unhelpful set of distortions that make it harder for people to fully perceive the real opportunities and the real risks that the Singularity brings.
These distortions grow out of a wider confusion – confusion about the complex interplay of forces that are leading society to the adoption of ever-more powerful technologies, including ever-more powerful AI.
It’s my task in this book to dispel the confusion, to untangle the distortions, to highlight practical steps forward, and to attract much more serious attention to the Singularity. The future of humanity is at stake.
Let’s start with the confusion.
Confusion, turbulence, and peril
The 2020s could be called the Decade of Confusion. Never before has so much information washed over everyone, leaving us, all too often, overwhelmed, intimidated, and distracted. Former certainties have dimmed. Long-established alliances have fragmented. Flurries of excitement have pivoted quickly to chaos and disappointment. These are turbulent times.
However, if we could see through the confusion, distraction, and intimidation, what we should notice is that human flourishing is, potentially, poised to soar to unprecedented levels. Fast-changing technologies are on the point of providing a string of remarkable benefits. We are near the threshold of radical improvements to health, nutrition, security, creativity, collaboration, intelligence, awareness, and enlightenment – with these improvements being available to everyone.
Unfortunately, these same fast-changing technologies also threaten multiple sorts of disaster. These technologies are two-edged swords. Unless we wield them with great skill, they are likely to spin out of control. If we remain overwhelmed, intimidated, and distracted, our prospects are poor. Accordingly, these are perilous times.
These dual future possibilities – technology-enabled sustainable superabundance, versus technology-induced catastrophe – have featured in numerous discussions that I have chaired at London Futurists meetups going all the way back to March 2008.
As these discussions have progressed, year by year, I have gradually formulated and refined what I now call the Singularity Principles. These principles are intended:
- To steer humanity’s relationships with fast-changing technologies,
- To manage multiple risks of disaster,
- To enable the attainment of remarkable benefits,
- And, thereby, to help humanity approach a profoundly positive singularity.
In short, the Singularity Principles are intended to counter today’s widespread confusion, distraction, and intimidation, by providing clarity, credible grounds for hope, and an urgent call to action.
This time it’s different
I first introduced the Singularity Principles, under that name and with the same general format, in the final chapter, “Singularity”, of my 2021 book Vital Foresight: The Case for Active Transhumanism. That chapter is the culmination of a 642-page book; the preceding sixteen chapters set out at some length the challenges and opportunities that these principles need to address.
Since the publication of Vital Foresight, it has become evident to me that the Singularity Principles require a short, focused book of their own. That’s what you now hold in your hands.
The Singularity Principles is by no means the only new book on the subject of the management of powerful disruptive technologies. The public, thankfully, are waking up to the need to understand these technologies better, and numerous authors are responding to that need. As one example, the phrase “Artificial Intelligence” forms part of the title of scores of new books.
I have personally learned many things from some of these recent books. However, to speak frankly, I find myself dissatisfied with the prescriptions these authors have advanced. These authors generally fail to appreciate the full extent of the threats and opportunities ahead. And even when they do see the true scale of these issues, the recommendations they propose strike me as inadequate.
Therefore, I cannot keep silent.
Accordingly, I present in this new book the content of the Singularity Principles, brought up to date in the light of recent debates and new insights. The book also covers:
- Why the Singularity Principles are sorely needed
- The source and design of these principles
- The significance of the term “Singularity”
- Why there is so much unhelpful confusion about “the Singularity”
- What’s different about the Singularity Principles, compared to recommendations of other analysts
- The kinds of outcomes expected if these principles are followed
- The kinds of outcomes expected if these principles are not followed
- How you – dear reader – can, and should, become involved, finding your place in a growing coalition
- How these principles are likely to evolve further
- How these principles can be put into practice, all around the world – with the help of people like you.
The scope of the Principles
To start with, the Singularity Principles can and should be applied to the anticipation and management of the NBIC technologies that are at the heart of the current, fourth, industrial revolution. NBIC – nanotech, biotech, infotech, and cognotech – is a quartet of interlinked technological disruptions, each likely to grow significantly stronger as the 2020s unfold:
- Nanotech can provide resilient new materials, new processes for manufacturing and recycling, new ways to capture and distribute energy, new types of computing hardware, and pervasive new low-cost surveillance networks of all-seeing sensors
- Infotech can draw unexpected inferences from large datasets, leaping over human capabilities in increasing numbers of domains of thought, and displacing greater numbers of human employees from tasks which used to occupy large parts of their paid employment
- Biotech enables the modification not only of nature, but of human nature: it will allow us not only to create new types of lifeforms – synthetic organisms that can outperform those found in nature – but also to edit the human metabolism much more radically than is possible via existing tools such as vaccinations, antibiotics, and occasional organ transplants
- Cognotech allows similar modifications for the human mind, brain, and spirit, conceivably enabling in just a few short weeks the kind of changes in mindset and inner character which previously might have required many years of disciplined practice of yoga, meditation, and/or therapy; it also enables alarming new types of mind control and ego manipulation.
Each of these four technological disruptions has the potential to fundamentally transform large parts of the human experience.
Looking beyond NBIC, the Singularity Principles can and should also be applied to the anticipation and management of the core technology that will likely give rise to a fifth industrial revolution, namely the technology of AGI (artificial general intelligence), and the rapid further advances towards artificial superintelligence that will likely follow fast on the heels of AGI.
Artificial superintelligence will exceed human capabilities, not just in individual fields of mental reasoning, but in all fields of mental reasoning.
The emergence of AGI is known as the technological singularity – or, more briefly, as the Singularity.
In other words, the Singularity Principles apply both:
- To the longer-term lead-up to the Singularity, from today’s fast-improving NBIC technologies,
- And to the shorter-term lead-up to the Singularity, as AI gains more general capabilities.
In both cases, anticipation and management of possible outcomes will be of vital importance.
By the way – in case it’s not already clear – please don’t expect a clever new piece of technology, or some brilliant technical design, to somehow solve, by itself, the challenges posed by NBIC technologies and AGI. These challenges extend far beyond what could be wrestled into submission by some dazzling mathematical wizardry, by the incorporation of an ingenious new piece of silicon at the heart of every computer, or by any other “quick fix”. Indeed, the considerable effort being invested by some organisations in a search for that kind of fix is, arguably, a distraction from a sober assessment of the bigger picture.
Better technology, better product design, better mathematics, and better hardware can all be part of the full solution. But that full solution also needs, critically, to include aspects of organisational design, economic incentives, legal frameworks, and political oversight. That’s the argument I develop in the chapters ahead.
It has been my privilege and pleasure to be the person who has committed these principles to writing. However, the ideas in this book have benefitted greatly from the collective insight of the London Futurists community, as well as from feedback from people who have read my previous books, commented on my blogposts, or attended some of my speaking engagements.
In particular, I gratefully acknowledge diverse inputs over the years from numerous:
- Scientists, technologists, and engineers
- Entrepreneurs, designers, and artists
- Humanitarians, activists, and lawyers
- Educators, psychologists, and economists
- Philosophers, rationalists, and effective altruists
- Historians, sociologists, and forecasters
- Ethicists, transhumanists, and singularitarians.
I stand on the shoulders of all these contributors.
It is my sincere hope that this kind of collective insight can deepen and accelerate, and that this book can play a vital role in that process.
As a result, the 2020s can transform from the Decade of Confusion into the Decade of Clarity, Choice, and Creativity.
But if that collective insight fails to rise to the occasion, the 2020s, or the 2030s, could become instead the Decade of Cataclysm. The forces involved are that strong.
The short form of the Principles
Here’s the short form of the Singularity Principles.
As we develop and interact with increasingly powerful technologies, we should be sure we understand:
- The goals that we’re hoping to accomplish – rather than merely drifting along in some direction because it sounds nice, or has some alluring features, or it seemed like a good idea the last time we thought about strategic direction
- The products and methods that are most likely to serve these goals well – rather than persisting with products or methods that happen to make us feel comfortable, or which have given us some good results in the past
- How we will manage the surprises arising en route to our goals – rather than being caught flat-footed, a victim of inertia or denial, when unexpected signals start showing on our radars.
These are important high-level points. But we need to dig deeper into how to apply them. That’s what’s covered in the pages ahead.
The four areas covered by the Principles
The Singularity Principles split into four areas:
- Methods to analyse the goals and outcomes that may arise from particular technologies
- The characteristics that are highly desirable in technological solutions
- Methods to ensure that development takes place responsibly
- Evolution and enforcement:
- How this overall set of recommendations will evolve further over time
- How to increase the likelihood that these recommendations are applied in practice rather than simply being some kind of wishful thinking.
I’ve given the principles in each of these four areas the following names:
- Analysing goals and potential outcomes:
- Question desirability
- Clarify externalities
- Require peer reviews
- Involve multiple perspectives
- Analyse the whole system
- Anticipate fat tails
- Desirable characteristics of technological solutions:
- Reject opacity
- Promote resilience
- Promote verifiability
- Promote auditability
- Clarify risks to users
- Clarify trade-offs
- Ensuring development takes place responsibly:
- Insist on accountability
- Penalise disinformation
- Design for cooperation
- Analyse via simulations
- Maintain human oversight
- Evolution and enforcement:
- Build consensus regarding principles
- Provide incentives to address omissions
- Halt development if principles not upheld
- Consolidate progress via legal frameworks
That makes 21 principles in total. We’ll spend some time in the middle portion of this book on each of them in turn.
But first, the next few chapters will provide context, to help raise awareness of how and why all of us, in our own ways, should become active supporters of these principles.
What lies ahead
The following chapters provide more information on:
- Background: Ten essential observations
- Fast-changing technologies: risks and benefits
- What is the Singularity?
- The question of urgency
- The Singularity Principles in depth:
- Key success factors
- Questions arising