To AGI or not AGI?
As we approach the conclusion of this review of the Singularity Principles, this question demands an answer:
Should all attempts to move beyond narrow AI systems to AGI (artificial general intelligence) be resisted?
After all, despite the many good ideas in the Singularity Principles, there is no guarantee that these ideas will prove successful.
Despite our best endeavours:
- AGI might emerge with a set of fundamental values that are misaligned with human flourishing
- AGI is likely to withstand any attempts by humans to control it.
The question “To AGI or not AGI?” splits into three:
- The feasibility of coordinating worldwide action
- The desirability of preventing the creation of AGI
- The clarity of a dividing line between AI and AGI.
Global action against the creation of AGI?
Any attempts to prevent the creation of AGI would need to apply worldwide. Otherwise, whilst some countries might abstain from AGI, other countries could proceed with projects that, intentionally or otherwise, give rise to AGI.
Is such a system of worldwide control conceivable?
The Singularity Principles themselves conceive of such a system:
- Adherence to a set of principles will initially be agreed in a subset of countries
- Countries outside the agreement will be subject to economic sanctions and other restrictions, in the event that technology projects inside these countries fail to comply
- The set of countries that accept the principles will eventually extend to the entire world.
However, the Singularity Principles don’t require the prevention of the creation of AGI. They only require the acceptance of framework conditions, which are designed to increase the likelihood of AGI being profoundly beneficial for humanity.
Compared to any blanket ban, such framework conditions would be easier for countries to accept. Such a framework doesn’t rule out the creation of AGI. It just rules out various dangerous developmental shortcuts.
However, if there is indeed an argument that, despite all the recommendations and regulations of the Singularity Principles, AGI remains too risky, then that argument could be shared worldwide. It may have the potential to transcend different political ideologies, economic systems, and national perspectives.
Accordingly, it appears credible that the world might collectively commit not to create AGI. But that presupposes that a sufficiently compelling argument is raised in support of the desirability of that outcome.
Does such an argument exist, and does it stack up?
Assessing the desirability of AGI
On the other side of the question, the reasons to want to create AGI can be summarised as follows:
- Sheer curiosity: is AGI something that is actually possible?
- A desire to learn more about human minds by comparison with whatever kind of mind an AGI possesses
- A desire to take advantage of the additional intelligence of an AGI to solve problems that are currently beyond human capability.
The third of these reasons matches what DeepMind founder Demis Hassabis has described, on a number of occasions, as the purpose of his company’s work with AI:
Solve intelligence, and then use that to solve everything else…
Cancer, climate change, energy, genomics, macroeconomics, financial systems, physics: many of the systems we would like to master are getting so complex. There’s such an information overload that it’s becoming difficult for even the smartest humans to master it in their lifetimes. How do we sift through this deluge of data to find the right insights? One way of thinking of AGI is as a process that will automatically convert unstructured information into actionable knowledge. What we’re working on is potentially a meta-solution to any problem.
Any decision to prevent the creation of AGI would mean abandoning these specific aspirations. For many people, that will be a deeply unpopular decision. They will say that AGI represents humanity’s best hope for solving problems such as the ones Demis Hassabis lists.
Here’s what could make that decision more palatable: the belief that solutions to these problems could be attained using lesser technological innovations, such as nanotechnology and narrow AI.
We don’t yet know whether these lesser technologies will be sufficient to solve cancer, climate change, and so on. But that possibility might become clearer in the years ahead.
In such a case, the world might decide as follows:
- Allow constrained, limited investigations into AI and other technologies
- Prevent any developments that could allow AGI to emerge.
A dividing line between AI and AGI?
A decision like the one just suggested would face its own practical difficulties. It’s not just the question of global enforcement, addressed earlier in this chapter. It’s also the question of knowing which developments “could allow AGI to emerge”.
The crux of the uncertainty here is our lack of agreement on which existing trends in AI development might lead, unexpectedly, to the kind of general AI which introduces new problems of alignment and control.
For example, consider the question of whether scaling up existing methods of deep reinforcement learning will be sufficient to reach AGI. Four researchers from DeepMind submitted an article in June 2021 to the peer-reviewed journal Artificial Intelligence entitled “Reward is enough”. Here’s the abstract:
In this article we hypothesise that intelligence, and its associated abilities, can be understood as subserving the maximisation of reward. Accordingly, reward is enough to drive behaviour that exhibits abilities studied in natural and artificial intelligence, including knowledge, learning, perception, social intelligence, language, generalisation and imitation. This is in contrast to the view that specialised problem formulations are needed for each ability, based on other signals or objectives. Furthermore, we suggest that agents that learn through trial and error experience to maximise reward could learn behaviour that exhibits most if not all of these abilities, and therefore that powerful reinforcement learning agents could constitute a solution to artificial general intelligence.
That article represents a minority view within the community of AI researchers. However, it seems rash to completely rule out the hypothesis that it makes. As the renowned blogger Scott Alexander has recently argued in an article in Astral Codex Ten, it’s by no means inconceivable that increased “AI size” will “solve [all the] flubs [mistakes]” of present-day AI systems.
If humanity wishes to remain safe, it would therefore need to restrict attempts to scale up existing deep reinforcement learning systems. Again, this policy would bring its own problems, since it’s unclear which types of modifications to existing systems might be sufficient to raise the effective scale of an AI system and thereby tip it over the threshold into AGI.
It’s the same with almost every other current project to improve aspects of today’s AI. There’s always a risk that such improvements would bring AGI closer. Could all these projects be halted? That seems extremely unlikely.
A practical proposal
Given the severe difficulties facing any attempt at a blanket ban on research that could lead to AGI, the following approach appears more likely to win wider support:
- Mandate that the effort and resources placed into improving the capabilities of AI should be matched by effort and resources placed into improving the safety and beneficence of AI.
Rather than trying to slow down the creation of AGI, this proposal urges dramatically speeding up work on the methods to steer AGI in positive directions.
For example, research needs to be mandated into:
- Methods to boost alignment and/or control that would be applicable to any powerful AI system
- The most effective ways to raise public awareness and understanding of the risks and opportunities of AGI
- The most useful ways to involve politicians and legislators in these discussions
- How to identify in advance likely thresholds between narrow AI and AGI
- How to detect, reliably, when an AI system is approaching any such threshold – before such a transition becomes unstoppable
- How to pivot sharply in any case where an AI system is approaching the AGI threshold.
The question of how to measure progress toward AGI is the subject of the next chapter.