To AGI or not AGI?

As we approach the conclusion of this review of the Singularity Principles, this question demands an answer:

Should all attempts to move beyond narrow AI systems to AGI (artificial general intelligence) be resisted?

After all, despite the many good ideas in the Singularity Principles, there is no guarantee that these ideas will prove successful.

Despite our best endeavours:

  • AGI might emerge with a set of fundamental values that are misaligned with human flourishing
  • AGI is likely to withstand any attempts by humans to control it.

The question “To AGI or not AGI?” splits into three:

  1. The feasibility of coordinating worldwide action
  2. The desirability of preventing the creation of AGI
  3. The clarity of a dividing line between AI and AGI.

Global action against the creation of AGI?

Any attempts to prevent the creation of AGI would need to apply worldwide. Otherwise, whilst some countries might abstain from AGI, other countries could proceed with projects that, intentionally or otherwise, give rise to AGI.

Is such a system of worldwide control conceivable?

The Singularity Principles themselves conceive of such a system:

  • Adherence to a set of principles will initially be agreed in a subset of countries
  • Countries outside the agreement will be subject to economic sanctions and other restrictions, in the event that technology projects inside these countries fail to comply
  • The set of countries that accept the principles will eventually extend to the entire world.

However, the Singularity Principles don’t require the prevention of the creation of AGI. They only require the acceptance of framework conditions, which are designed to increase the likelihood of AGI being profoundly beneficial for humanity.

Compared to any blanket ban, such framework conditions would be easier for countries to accept. They don’t rule out the creation of AGI; they just rule out various dangerous developmental shortcuts.

However, if there is indeed an argument that, despite all the recommendations and regulations of the Singularity Principles, AGI remains too risky, then that argument could be shared worldwide. It may have the potential to transcend different political ideologies, economic systems, and national perspectives.

Accordingly, it appears credible that the world might collectively commit not to create AGI. But that presupposes that a sufficiently compelling argument is raised in support of the desirability of that outcome.

Does such an argument exist, and does it stack up?

Assessing the desirability of AGI

On the other side of the argument, the reasons to want to create AGI can be summarised as follows:

  1. Sheer curiosity: is AGI something that is actually possible?
  2. A desire to learn more about human minds by comparison with whatever kind of mind an AGI possesses
  3. A desire to take advantage of the additional intelligence of an AGI to solve problems that are currently beyond human capability.

The third of these reasons matches what DeepMind founder Demis Hassabis has described, on a number of occasions, as the purpose of his company’s work with AI:

Solve intelligence, and then use that to solve everything else…

Cancer, climate change, energy, genomics, macroeconomics, financial systems, physics: many of the systems we would like to master are getting so complex. There’s such an information overload that it’s becoming difficult for even the smartest humans to master it in their lifetimes. How do we sift through this deluge of data to find the right insights? One way of thinking of AGI is as a process that will automatically convert unstructured information into actionable knowledge. What we’re working on is potentially a meta-solution to any problem.

Any decision to prevent the creation of AGI would entail abandoning these specific aspirations. For many people, that will be a deeply unpopular decision. They will say that AGI represents humanity’s best hope for solving problems such as the ones Demis Hassabis lists.

Here’s what could make that decision more palatable: the belief that solutions to these problems could be attained using lesser technological innovations, such as nanotechnology and narrow AI.

We don’t yet know whether these lesser technologies will be sufficient to solve cancer, climate change, and so on. But that possibility might become clearer in the years ahead.

In such a case, the world might decide as follows:

  • Allow constrained, limited investigations into AI and other technologies
  • Prevent any developments that could allow AGI to emerge.

A dividing line between AI and AGI?

A decision like the one just suggested would face its own practical difficulties. It’s not just the question of global enforcement, addressed earlier in this chapter. It’s also the question of knowing which developments “could allow AGI to emerge”.

The crux of the uncertainty here is our lack of agreement on which existing trends in AI development might lead, unexpectedly, to the kind of general AI which introduces new problems of alignment and control.

For example, consider the question of whether scaling up existing methods of deep reinforcement learning will be sufficient to reach AGI. In June 2021, four researchers from DeepMind submitted an article entitled “Reward is enough” to the peer-reviewed journal Artificial Intelligence. Here’s the abstract:

In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.
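
To make that hypothesis concrete, here is a minimal sketch, in Python, of the kind of trial-and-error reward maximisation the abstract describes. The toy “corridor” environment and all parameter values are invented purely for illustration; the “Reward is enough” claim is, in effect, that vastly scaled-up descendants of this simple learning loop could give rise to the full range of intelligent abilities.

```python
import random

# Toy illustration only: a 5-state "corridor" where the agent is rewarded
# solely for reaching the right-hand end. The environment and parameter
# values are invented for illustration, not taken from the DeepMind article.
N_STATES = 5
ACTIONS = [0, 1]                       # 0 = move left, 1 = move right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # learning rate, discount, exploration rate

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move left or right; reward 1.0 only at the right-hand end."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

def greedy(state):
    """Pick the currently best-valued action, breaking ties at random."""
    best = max(q[(state, a)] for a in ACTIONS)
    return random.choice([a for a in ACTIONS if q[(state, a)] == best])

for episode in range(500):
    state, done = 0, False
    while not done:
        # Trial and error: mostly exploit current estimates, sometimes explore.
        action = random.choice(ACTIONS) if random.random() < EPSILON else greedy(state)
        nxt, reward, done = step(state, action)
        # Nudge the estimate towards reward plus discounted future value.
        best_next = max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (reward + GAMMA * best_next - q[(state, action)])
        state = nxt

# After training, the learned policy is simply "keep moving right".
print({s: greedy(s) for s in range(N_STATES)})
```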

That article represents a minority view within the community of AI researchers. However, it seems rash to rule out completely the hypothesis it advances. As the renowned blogger Scott Alexander has recently argued in an article in Astral Codex Ten, it’s by no means inconceivable that increased “AI size” will “solve [all the] flubs [mistakes]” of present-day AI systems.

If humanity wishes to remain safe, it would therefore need to restrict attempts to scale up existing deep reinforcement learning systems. Again, this policy would bring its own problems, since it’s unclear which types of modifications to existing systems might be sufficient to raise the effective scale of an AI system and thereby tip it over the threshold into AGI.
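
Training compute is sometimes proposed as a rough proxy for that “effective scale”. The sketch below uses the common rule of thumb that training compute is roughly 6 × parameters × training tokens, together with a purely hypothetical review threshold, to show what such a check might look like; the difficulty just noted is precisely that algorithmic improvements can raise effective capability without raising this number.

```python
# Simplified illustration of using estimated training compute as a crude
# proxy for "effective scale". The threshold is hypothetical, chosen only
# for illustration; real policy thresholds (and better proxies) are open questions.

REVIEW_THRESHOLD_FLOPS = 1e25   # hypothetical cap, not an actual regulation

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Common rule of thumb: total training compute ~= 6 * N * D."""
    return 6.0 * parameters * training_tokens

def requires_extra_review(parameters: float, training_tokens: float) -> bool:
    """Flag proposed training runs whose estimated compute exceeds the cap."""
    return estimated_training_flops(parameters, training_tokens) > REVIEW_THRESHOLD_FLOPS

# Example: a 100-billion-parameter model trained on 2 trillion tokens.
print(estimated_training_flops(1e11, 2e12))   # 1.2e+24 FLOPs
print(requires_extra_review(1e11, 2e12))      # False under this hypothetical cap
```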

It’s the same with almost every other current project to improve aspects of today’s AI. There’s always a risk that such improvements would bring AGI closer. Could all these projects be halted? That seems extremely unlikely.

A practical proposal

Given the severe difficulties facing any attempt at a blanket ban on research that could lead to AGI, the following approach appears more likely to win wider support:

  • Mandate that the effort and resources placed into improving the capabilities of AI should be matched by effort and resources placed into improving the safety and beneficence of AI.

Rather than trying to slow down the creation of AGI, this proposal urges a dramatic speed-up of work on methods to steer AGI in positive directions.

For example, research needs to be mandated into:

  • Methods to boost alignment and/or control that would be applicable to any powerful AI system
  • The most effective ways to raise public awareness and understanding of the risks and opportunities of AGI
  • The most useful ways to involve politicians and legislators in these discussions
  • How to identify in advance likely thresholds between narrow AI and AGI
  • How to detect, reliably, when an AI system is approaching any such threshold – before such a transition becomes unstoppable (a sketch of this kind of check follows this list)
  • How to pivot sharply in any case where an AI system is approaching the AGI threshold.
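
As a purely illustrative sketch of the detection step mentioned in the list above, the following Python fragment gates further scaling on a battery of capability evaluations. The evaluation names and threshold scores are hypothetical placeholders; designing evaluations that reliably anticipate a transition from narrow AI to AGI is exactly the research this proposal would mandate.

```python
# Hypothetical illustration of gating further scaling on capability evaluations.
# Evaluation names and thresholds are placeholders, not established benchmarks.

from typing import Callable, Dict

# Each evaluation returns a score in [0, 1]; higher means closer to the
# capability of concern (e.g. long-horizon planning, autonomous replication).
WARNING_THRESHOLDS: Dict[str, float] = {
    "long_horizon_planning": 0.6,
    "autonomous_replication": 0.2,
    "novel_strategy_generation": 0.5,
}

def assess_system(run_eval: Callable[[str], float]) -> Dict[str, float]:
    """Run every registered evaluation against the candidate system."""
    return {name: run_eval(name) for name in WARNING_THRESHOLDS}

def may_continue_scaling(scores: Dict[str, float]) -> bool:
    """Pause further scaling if any score crosses its warning threshold."""
    return all(scores[name] < WARNING_THRESHOLDS[name] for name in scores)

# Example usage with a stub evaluator standing in for real capability tests.
stub_scores = assess_system(lambda name: {"long_horizon_planning": 0.4,
                                          "autonomous_replication": 0.05,
                                          "novel_strategy_generation": 0.3}[name])
print(may_continue_scaling(stub_scores))   # True: no warning threshold crossed
```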

The question of how to measure progress toward AGI is the subject of the next chapter.
