Extended table of contents

This page lists the main section headings for all the chapters in this book.

0. Preface

  • Confusion, turbulence, and peril
  • This time it’s different
  • The scope of the Principles
  • Collective insight
  • The short form of the Principles
  • The four areas covered by the Principles
  • What lies ahead

1. Background: Ten essential observations

  • Tech breakthroughs are unpredictable (both timing and impact)
  • Potential complex interactions make prediction even harder
  • Changes in human attributes complicate tech changes
  • Greater tech power enables more devastating results
  • Different perspectives assess “good” vs. “bad” differently
  • Competition can be hazardous as well as beneficial
  • Some tech failures would be too drastic to allow recovery
  • A history of good results is no guarantee of future success
  • It’s insufficient to rely on good intentions
  • Wishful thinking predisposes blindness to problems

2. Fast-changing technologies: risks and benefits

  • Technology risk factors
  • Prioritising benefits?
  • What about ethics?
  • The transhumanist stance

2.1 Special complications with artificial intelligence

  • Problems with training data
  • The black box nature of AI
  • Interactions between multiple algorithms
  • Self-improving AI
  • Devious AI
  • Four catastrophic error modes
  • The broader perspective

2.2 The AI Control Problem

  • The gorilla problem
  • Examples of dangers with uncontrollable AI
  • Proposed solutions (which don’t work)
  • The impossibility of full verification
  • Emotion misses the point
  • No off switch
  • The ineffectiveness of tripwires
  • Escaping from confinement
  • The ineffectiveness of restrictions
  • No automatic super ethics
  • Issues with hard-wiring ethical principles

2.3 The AI Alignment Problem

  • Asimov’s Three Laws
  • Ethical dilemmas and trade-offs
  • Problems with proxies
  • The gaming of proxies
  • Simple examples of profound problems
  • Humans disagree
  • No automatic super ethics (again)
  • Other options for answers?

2.4 No easy solutions

  • No guarantees from the free market
  • No guarantees from cosmic destiny
  • Planet B?
  • Humans merging with AI?
  • Approaching the Singularity

3. What is the Singularity?

  • Breaking down the definition
  • Four alternative definitions
  • Four possible routes to the Singularity
  • The Singularity and AI self-awareness
  • Singularity timescales
  • Positive and negative singularities
  • Tripwires and canary signals
  • Moving forward

3.1 The Singularitarian Stance

  • AGI is possible
  • AGI could happen within just a few decades
  • Winner takes all
  • The difficulty of controlling AGI
  • Superintelligence and superethics
  • Not the Terminator
  • Recap
  • Opposition to the Singularitarian Stance

3.2 A complication: the Singularity Shadow

  • Singularity timescale determinism
  • Singularity outcome determinism
  • Singularity hyping
  • Singularity risk complacency
  • Singularity term overloading
  • Singularity anti-regulation fundamentalism
  • Singularity preoccupation
  • Looking forward

3.3 Bad reasons to deny the Singularity

  • The denial of death
  • How special is the human mind?
  • A credible positive vision

4. The question of urgency

  • Factors causing AI to improve
  • 15 options on the table
  • The difficulty of measuring progress
  • Learning from Christopher Columbus
  • The possibility of fast take-off

5. The Singularity Principles in depth

5.1 Analysing goals and potential outcomes

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

5.2 Desirable characteristics of technological solutions

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

5.3 Ensuring development takes place responsibly

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

5.4 Evolution and enforcement

  • Build consensus regarding principles
  • Provide incentives to address omissions
  • Halt development if principles are not upheld
  • Consolidate progress via legal frameworks

6. Key success factors

  • Public understanding
  • Persistent urgency
  • Reliable action against noncompliance
  • Public funding
  • International support
  • A sense of inclusion and collaboration

7. Questions arising

7.1 Measuring human flourishing

  • Some example trade-offs
  • Updating the Universal Declaration of Human Rights
  • Constructing an Index of Human and Social Flourishing

7.2 Trustable monitoring

  • Moore’s Law of Mad Scientists
  • Four projects to reduce the dangers of WMDs
  • Detecting mavericks
  • Examples of trustable monitoring
  • Watching the watchers

7.3 Uplifting politics

  • Uplifting regulators
  • The central role of politics
  • Toward superdemocracy
  • Technology improving politics
  • Transcending party politics
  • The prospects for political progress

7.4 Uplifting education

  • Top level areas of the Vital Syllabus
  • Improving the Vital Syllabus

7.5 To AGI or not AGI?

  • Global action against the creation of AGI?
  • Possible alternatives to AGI?
  • A dividing line between AI and AGI?
  • A practical proposal

7.6 Measuring progress toward AGI

  • Aggregating expert opinions
  • Metaculus predictions
  • Alternative canary signals for AGI
  • AI index reports

7.7 Growing a coalition of the willing

  • Risks and actions

RAFT 2035 – a new initiative for a new decade

The need for a better politics is more pressing than ever.

Since its formation, Transpolitica has run a number of different projects aimed at building momentum behind a technoprogressive vision for a better politics. For a new decade, it’s time to take a different approach that builds on these previous initiatives.

The planned new vehicle has the name “RAFT 2035”.

RAFT is an acronym:

  • Roadmap (‘R’) – not just a lofty aspiration, but specific steps and interim targets
  • towards Abundance (‘A’) for all – beyond a world of scarcity and conflict
  • enabling Flourishing (‘F’) as never before – with life containing not just possessions, but enriched experiences, creativity, and meaning
  • via Transcendence (‘T’) – since we won’t be able to make progress by staying as we are.

RAFT is also a metaphor. Here’s a copy of the explanation:

When turbulent waters are bearing down fast, it’s very helpful to have a sturdy raft at hand.

The fifteen years from 2020 to 2035 could be the most turbulent of human history. Revolutions are gathering pace in four overlapping fields of technology: nanotech, biotech, infotech, and cognotech, or NBIC for short. In combination, these NBIC revolutions offer enormous new possibilities – enormous opportunities and enormous risks:…

Rapid technological change tends to provoke a turbulent social reaction. Old certainties fade. New winners arrive on the scene, flaunting their power, and upturning previous networks of relationships. Within the general public, a sense of alienation and disruption mingles with a sense of profound possibility. Fear and hope jostle each other. Whilst some social metrics indicate major progress, others indicate major setbacks. The claim “You’ve never had it so good” coexists with the counterclaim “It’s going to be worse than ever”. To add to the bewilderment, there seems to be lots of evidence confirming both views.

The greater the pace of change, the more intense the dislocation. Given the increased scale, speed, and global reach of the ongoing NBIC revolutions, the disruptions that followed in the wake of previous industrial revolutions – seismic though they were – are likely to be dwarfed by what lies ahead.

Turbulent times require a space for shelter and reflection, clear navigational vision despite the mists of uncertainty, and a powerful engine for us to pursue our own direction, rather than just being carried along by forces outside our control. In short, turbulent times require a powerful “raft” – a roadmap to a future in which the extraordinary powers latent in NBIC technologies are used to raise humanity to new levels of flourishing, rather than driving us over some dreadful precipice.

The words just quoted come from the opening page of a short book that is planned for publication in January 2020. The chapters of this book are reworked versions of the scripts used in the recent “Technoprogressive roadmap” series of videos.

Over the next couple of weeks, all the chapters of this proposed book will be made available for review and comment:

  • As pages on the Transpolitica website, starting here
  • As shared Google documents, starting here, where comments and suggestions are welcome.

All being well, RAFT 2035 will also become a conference, held sometime around the middle of 2020.

You may note that, in the way RAFT 2035 is presented to the world,

  • The word “transhumanist” has moved into the background – since that word tends to provoke many hostile reactions
  • The word “technoprogressive” also takes a backseat – since, again, that word has negative connotations in at least some circles.

If you like the basic idea of what’s being proposed, here’s how you can help:

  • Read some of the content that is already available, and provide comments
    • If you notice something that seems mistaken, or difficult to understand
    • If you think there is a gap that should be addressed
    • If you think there’s a better way to express something.

Thanks in anticipation!
