9. Abundant intelligence

This page contains Chapter 9 from
Sustainable Superabundance: A universal transhumanist manifesto for the 2020s and beyond


Note: The text of this chapter of the Manifesto is in draft and is presently undergoing regular revision.

To make comments or suggest changes in the text, please use this shared document.

9. Abundant intelligence

The sustainable superabundance that potentially lies ahead involves more than just a better environment – clean energy, nutritious food, and ample material goods (the subjects of earlier chapters). It involves more than just the radical enhancement of our bodily health (the subject of the previous chapter). Crucially, it also involves the radical enhancement of our brains, minds, and spirits. In other words, with the right choices and the right actions, we can look forward to a blossoming of all-round intelligence.

This intelligence will reside in three locations. First, our individual brains can be improved, and will work much better than before. Second, artificial intelligence, resident in all kinds of computing hardware, can jump upwards in capability. Third, the aggregate intelligence of whole societies of people can be upgraded, allowing groups to draw on collective insight to solve problems that would previously have defeated them.

None of this will take place automatically. Nor will increases in intelligence necessarily lead to beneficial scenarios, rather than to deadly scenarios. As with all the other spheres of abundance discussed in this Manifesto, the actual outcome will depend critically on choices taken by humanity over the next few years.

A key complication is that merely becoming more intelligent is no guarantee that someone will become wiser. Far from it. Some of the world’s nastiest politicians are evidently highly intelligent; likewise some of the world’s most ruthless criminals. A given quantity of intelligence can be applied in service of any number of different goals – including destructive goals as well as constructive goals. Intelligence can be deployed to confuse and mislead – to cajole and bamboozle. Greater intelligence gives people greater ability to accomplish whatever objectives they have already decided to pursue – and greater ability to find clever justifications to promote those objectives.

Although we humans like to think of ourselves as rational beings, a better description in many cases is that we are rationalising beings. We are expert at finding reasons that support our pre-existing choices. The more information that modern online searches place at our disposal, the easier it becomes for us to discover special cases that appear to back up our own favourite worldviews. The wider the online communities we connect into, the more we come across people who seem to share our viewpoints, reassuring us that we are on the right lines. As for evidence that appears to contradict our views, and critics who disagree with us, the online world can provide us with ingenious reasons to disregard both.

True all-round intelligence will rise above such narrowness and blinkered reasoning. But reaching this level of intelligence will require a lot more than merely turbo-charging our existing modes of reasoning.

The rise of Artificial Intelligence

Intelligence can be defined as the ability to figure out how to accomplish goals. In simple environments with simple goals – for example, to win in a game of chess, or to find the quickest route between two locations on a map – the intelligence required is “narrow”. For more complex environments and more complex goals, intelligence needs more general capabilities.
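
To make the idea of “narrow” intelligence concrete, here is a minimal sketch in Python of a solver for the second of those simple goals – finding the shortest route between two locations. The map and the place names are invented purely for illustration.

```python
from collections import deque

def shortest_route(roads, start, goal):
    """Breadth-first search: returns the route with the fewest hops, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        route = frontier.popleft()
        here = route[-1]
        if here == goal:
            return route
        for neighbour in roads.get(here, []):
            if neighbour not in visited:
                visited.add(neighbour)
                frontier.append(route + [neighbour])
    return None  # no route exists

# A toy map, invented for illustration
roads = {
    "Home": ["Bridge", "Park"],
    "Bridge": ["Home", "Station"],
    "Park": ["Home", "Station"],
    "Station": ["Bridge", "Park", "Office"],
    "Office": ["Station"],
}
print(shortest_route(roads, "Home", "Office"))  # e.g. ['Home', 'Bridge', 'Station', 'Office']
```

Such a program is extremely competent within its single, fully specified environment, and entirely helpless outside it – which is exactly what makes its intelligence narrow.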

Human intelligence involves being able to understand and predict the motion of both animate and inanimate objects. It involves the development of a “theory of mind” – an understanding of the factors that can motivate creatures with minds to change their own beliefs and behaviours. It involves the skill of breaking down a complex task into a series of subtasks. It involves being able to select and accumulate resources that could be of use at later stages. It involves the ability to collect more information, for example by designing and carrying out experiments, in order to take better decisions. It involves being able to learn from setbacks and surprises, rather than merely repeating the same actions over and over.

From the 1940s onward, various aspects of human intelligence have been duplicated in electronic computers – starting with code-breaking and the calculation of missile trajectories. Over the decades, so-called “expert systems” emerged that could assist humans in making all kinds of decisions.

In more recent times, a disruptive new wave of computer programs called “machine learning” has achieved surprising success, often dramatically surpassing the performance of expert systems. Machine learning software can in effect infer by itself the relationships between various sorts of input and output data. For example, to tell the difference between pictures of cats and pictures of dogs, an expert system would include large numbers of specific rules, entered individually by human programmers, along with information about exceptions to each rule. A machine learning system, in contrast, would be shown lots of pictures of cats and dogs, and would, via a process known as “training”, figure out a set of factors to distinguish the two cases.
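
As a loose sketch of that contrast – with invented feature names and invented data, purely for illustration – the expert-system approach hard-codes its rules one by one, whereas the machine-learning approach infers its own decision boundary from labelled examples:

```python
# Expert-system style: rules written and maintained by hand, one at a time
def classify_by_rules(ear_pointiness, snout_length):
    if ear_pointiness > 0.7 and snout_length < 0.4:
        return "cat"
    # ...hundreds more hand-entered rules and exceptions would follow...
    return "dog"

# Machine-learning style: a tiny perceptron infers its own weights from labelled data
def train_perceptron(examples, labels, epochs=100, lr=0.1):
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for (x1, x2), y in zip(examples, labels):      # y is +1 for cat, -1 for dog
            prediction = 1 if w[0] * x1 + w[1] * x2 + b > 0 else -1
            if prediction != y:                        # adjust weights only on mistakes
                w[0] += lr * y * x1
                w[1] += lr * y * x2
                b += lr * y
    return w, b

# Invented training data: (ear_pointiness, snout_length) measurements
cats = [(0.9, 0.2), (0.8, 0.3), (0.85, 0.25)]
dogs = [(0.3, 0.8), (0.4, 0.7), (0.2, 0.9)]
w, b = train_perceptron(cats + dogs, [1] * len(cats) + [-1] * len(dogs))

x1, x2 = 0.95, 0.15                                    # an unseen, invented test case
print("cat" if w[0] * x1 + w[1] * x2 + b > 0 else "dog")
```

Real systems use deep networks with millions of learned parameters rather than a two-weight perceptron, but the principle of training is the same: the parameters are adjusted whenever the system's output disagrees with the labelled answer.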

Since the operation of successful machine learning involves numerous layers of simple binary decisions that have some elements in common with the on-off firing of neurons in the brain, the names “deep learning” and “neural networks” are commonly used.

Transhumanists anticipate that, just as expert systems have recently been overtaken in capability by the new wave of deep learning, so deep learning will in turn be overtaken by yet newer waves of artificial intelligence.

The acceleration of Artificial Intelligence

In the past, progress within Artificial Intelligence (AI) faltered from time to time, in periods known as “AI winters” when funding was withdrawn from research and development into AI. These reductions in funding arose from disappointments with the slow speed of breakthroughs in the field.

Within the last decade, however, the huge commercial impact of AI has been placed beyond question. The most valuable companies in the world, as measured by their market capitalisation, are all high-tech companies that can attribute large parts of their success to their prowess in AI.

In field after field, further commercial success awaits companies who can apply new AI techniques to improve their performance. For example, advice from AI can enable financial trading algorithms to work more incisively, and therefore to earn more profits for the owners of these algorithms. Electronic games can attain greater market appeal, and therefore more profits, if they incorporate artificial characters with advanced AI characteristics. Companies of all sorts will be able to cut costs and improve customer satisfaction if, instead of having to rely on human staff for all customer interactions, they can deploy knowledgeable AI within customer support systems. Software applications in general will be more widely adopted if users find them easier to operate – thanks to AI elements in the applications that can reliably determine what the users are trying to accomplish. Drug discovery is just one of many fields inside healthcare where AI stands poised to achieve commercial success. AI can also assist in creative tasks such as the development of new music and other pieces of art. And AI can increasingly support scientists and engineers in formulating new theories and mechanisms. It is the prospect of breakthroughs in one or more of these fields that drives ongoing large amounts of funding into advancing ideas in AI.

AI also stands to assist people who want to break into existing information systems, sabotage electronic infrastructure, or otherwise conduct cyberwarfare. In turn, cyberdefence systems incorporate AI elements in order to detect unusual or suspicious behaviour. In response, attackers seek out further improvements in the AI of their attack systems, so as to evade detection.

The same principles that apply for cyberattack and defence also apply to missile attack and defence. Weapons can improve their deadliness due to assistance from AI to disguise their flightpaths and to evade defence systems. In turn, defence systems can improve their own reliability by upgrading the competence of the AI they contain. It’s an arms race with a great deal of appetite for progress in AI.

The same principles apply, again, in the arms race between software that designs and positions carefully targeted “fake news”, in order to disrupt the healthy functioning of an organisation or society, and software that seeks to identify and filter out fake news before it unnecessarily enrages and confuses people.

In summary, whereas in the past, research into better AI was largely an academic discipline, nowadays it is driven by enormous motivations from both commercial and political forces. For this reason, it is most unlikely that another AI winter will occur.

Another reason for confidence is that there are many new lines of research awaiting further investigation. The field is far from reaching any theoretical impasse.

New ideas in Artificial Intelligence

Greater numbers of researchers in AI, supported by larger funding initiatives, can explore numerous different avenues to improve the performance, reliability, and quality of their systems.

Consider innovations in hardware. The rise of GPUs (Graphics Processing Units) alongside CPUs (Central Processing Units) did much to enable leapfrogs in deep learning. Variants of GPUs, including TPUs (Tensor Processing Units), are enabling further improvements. Other hardware innovations take their inspiration from the latest neuroscience insights into the operation of the human brain, via “neuromorphic computing”. These innovations in turn may be eclipsed by forthcoming breakthroughs with quantum computing architectures.

Consider new initiatives within machine learning. GANs (Generative Adversarial Networks) take advantage of a kind of adversarial arms race between two competing deep networks: one network generates new examples, conforming to a general pattern, and another network seeks to spot which are the newly created ones, compared to a background pool of pre-existing examples. For example, one network can generate photo-realistic images – of people, animals, scenery – and another network tries to pick out the generated examples from a background pool of images of real people, real animals, and real scenery. As each network improves its own performance, it leads in turn to improvements in the performance of the adversarial network. The rapid progress with GANs in the last few years has taken almost all observers by surprise.
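
A heavily simplified sketch of that adversarial loop appears below. It assumes PyTorch is available and uses a toy one-dimensional “dataset” (samples from a Gaussian) in place of photo-realistic images; it is meant only to show the shape of the idea, not a production design.

```python
import torch
import torch.nn as nn

def real_batch(n):
    # Toy "real data": samples drawn from a Gaussian centred at 4.0
    return torch.randn(n, 1) * 0.5 + 4.0

G = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))                 # generator
D = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())   # discriminator
opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    # 1. Train the discriminator to tell real samples from generated ones
    real = real_batch(32)
    fake = G(torch.randn(32, 8)).detach()
    d_loss = loss_fn(D(real), torch.ones(32, 1)) + loss_fn(D(fake), torch.zeros(32, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2. Train the generator to fool the discriminator
    fake = G(torch.randn(32, 8))
    g_loss = loss_fn(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# The generated samples should drift towards the real distribution (mean near 4.0)
print("mean of generated samples:", G(torch.randn(1000, 8)).mean().item())
```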

Consider another highly promising initiative within machine learning, namely deep reinforcement learning, in which software works out by itself which changes in the behaviour of agents are likely to increase the eventual attainment of a specified “reward” function. Deep reinforcement learning lies behind remarkable recent achievements by companies such as OpenAI and Google’s DeepMind.
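
The reward-driven idea can be sketched with tabular Q-learning on a toy corridor environment, invented here for illustration. Deep reinforcement learning, of the kind used in the systems mentioned above, replaces the table with a neural network, but the spirit of the update rule is the same.

```python
import random

# Toy corridor: states 0..5, actions -1 (step left) or +1 (step right), reward only at the goal
N_STATES, GOAL = 6, 5
q = {(s, a): 0.0 for s in range(N_STATES) for a in (-1, 1)}

for episode in range(500):
    state = 0
    while state != GOAL:
        # Epsilon-greedy: mostly exploit the current estimates, occasionally explore
        if random.random() < 0.1:
            action = random.choice((-1, 1))
        else:
            action = max((-1, 1), key=lambda a: q[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate towards reward + discounted best future value
        best_next = 0.0 if next_state == GOAL else max(q[(next_state, a)] for a in (-1, 1))
        q[(state, action)] += 0.5 * (reward + 0.9 * best_next - q[(state, action)])
        state = next_state

# The learned policy should be "step right" (+1) in every non-goal state
print({s: max((-1, 1), key=lambda a: q[(s, a)]) for s in range(GOAL)})
```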

Consider also “transfer learning”, whereby the results of training an AI to gain intelligence for one task can be used as a basis for the AI to learn a different task more quickly than if it started afresh from scratch. Transfer learning is an important step in moving AI from being merely “narrow” towards being “general”.
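
In code, transfer learning often looks like the following minimal sketch, which assumes PyTorch and a recent torchvision are installed: a network already trained on a large generic image dataset is reused, and only a small new final layer is trained for the new task.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pretrained on ImageNet, a large generic image dataset
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained layers: their learned features are reused as-is
for param in backbone.parameters():
    param.requires_grad = False

# Replace only the final layer, sized for the new task (say, 10 new categories)
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Only the new layer is trained, so far fewer examples and far less compute are needed
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
```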

Consider also the potential for a revival in ideas for “genetic algorithms”, in which software is improved by a process akin to random variation followed by natural selection.
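
A minimal illustrative sketch of that process – random variation followed by selection – evolving a string towards an arbitrary target; the target and the parameters are invented for illustration.

```python
import random

TARGET = "ABUNDANT INTELLIGENCE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # Count the characters that already match the target
    return sum(c == t for c, t in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Random variation: each character has a small chance of being replaced
    return "".join(random.choice(ALPHABET) if random.random() < rate else c for c in candidate)

# Start from a population of entirely random strings
population = ["".join(random.choice(ALPHABET) for _ in TARGET) for _ in range(200)]

for generation in range(500):
    # Selection: keep the fittest quarter of the population
    population.sort(key=fitness, reverse=True)
    survivors = population[:50]
    if fitness(survivors[0]) == len(TARGET):
        break
    # Variation: refill the population with mutated copies of the survivors
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

print(generation, population[0])
```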

Consider also the notable ongoing progress in “artificial emotional intelligence”, in which software incorporates features for the observation and management of emotions.

Finally consider the potential for creative combinations of all the above developments.

In all these cases, three factors tend to improve the performance of the system: the deployment of larger computing resources, the availability of larger datasets (including specially generated datasets), and refinements to the algorithms being used.

To highlight just the first of these three factors: over the last six years, the amount of computing power that has been applied to the training of specific deep learning systems has doubled on average once every three and a half months. That breakneck pace of acceleration leaves in the shade the Moore’s Law period of eighteen months which characterises a doubling of the power of individual integrated circuits. What motivates all the investment behind that rapid doubling pace is the set of concrete improvements participants can observe in the resulting machine learning systems.
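
To put rough numbers on that comparison, here is a back-of-the-envelope calculation (nothing more) of the growth implied by the two quoted doubling periods over a six-year span:

```python
months = 6 * 12  # a six-year span

ai_training_growth = 2 ** (months / 3.5)   # doubling every three and a half months
moores_law_growth = 2 ** (months / 18)     # doubling every eighteen months

print(f"AI training compute: roughly x{ai_training_growth:,.0f}")   # about x1,500,000
print(f"Moore's Law circuits: x{moores_law_growth:,.0f}")           # x16
```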

The special case of AGI

It remains an open question whether the various technical concepts covered in the preceding sections will be sufficient, given time, to allow AI to develop as far as AGI – Artificial General Intelligence – meaning that the AI would match or surpass human capability in all aspects of intelligence. It is likely, but not certain, that some brand new concepts will need to be incorporated in the design of the AI, before the level of AGI can be reached.

The reason the question remains open is that the full nature of human intelligence is a matter of considerable debate. It’s not clear whether all aspects of human intelligence are simply extrapolations or combinations of features we already broadly understand, or whether some fundamentally new characteristics are involved.

This uncertainty is linked in turn to controversies over the meaning and functioning of consciousness, self-awareness, and a sense of fundamental agency. Some analysts refer, for example, to “the hard problem of consciousness”. Other analysts, in contrast, believe we should distrust any apparent self-revealing certainties about unique features of our inner consciousness. These analysts suggest that our internal perceptions of our mental lives are unreliable. In this analysis, these inner perceptions can be likened to our visual perceptions of external phenomena – perceptions which are, famously, subject to numerous optical illusions.

In principle, two variants of potential AGI can be considered: one in which the AGI has an inner consciousness with similarities to that of humans, and one in which the AGI has no inner life, but is simply an excellent mechanical calculator. However, note that the emergence of either variant of AGI would change human society dramatically. In both cases, the AGI would match or surpass human capability in the definition given earlier for general intelligence, namely the ability to figure out how to accomplish (highly complex) goals in (highly complex) environments.

In more detail, the AGI would in each case match or surpass human skills in: understanding and predicting the motion of both animate and inanimate objects; understanding the factors that can motivate creatures with minds to change their own beliefs and behaviours; breaking down a complex task into a series of subtasks; selecting and accumulating resources that could be of use at later stages; collecting more information, for example by designing and carrying out experiments, in order to take better decisions; and learning from setbacks and surprises, rather than merely repeating the same actions over and over.

Whether or not an AGI has an inner consciousness can, therefore, be viewed as a secondary question – since both the scenarios just considered would involve far-reaching new possibilities in every field of human endeavour. The more pressing question is whether AGI is possible at all. The next question is how urgently we might need to prepare for its emergence. And that leads to the question of how we can prepare for it, so that its effects will be more likely to be beneficial rather than deadly.

Taking AGI seriously

Transhumanists assert that there is no fundamental reason why artificial systems are barred from ever reaching or surpassing human-level general intelligence. There’s nothing in a human brain which is outside the bounds of physics.

It is true that there may need to be one or more fundamental breakthroughs in our understanding of general intelligence before an AGI can be created. Novel concepts may be required, quite different from those which have already been pursued. It’s also true that these new concepts might be elusive and deeply counterintuitive. But science has generated and accommodated deeply counterintuitive concepts in the past, such as in general relativity and quantum mechanics. And engineering has used these counterintuitive concepts to create technology with breathtaking powers.

Moreover, transhumanists oppose any dogmatic insistence that the time required for any new conceptual breakthroughs with the design of AGI must inevitably be measured in multiple decades or even centuries. Sceptics who express such a dogma can point to no evidence in their favour. Instead, these sceptics are making an illicit extrapolation from their current state of not being able to comprehend how an AGI could be created, to the conclusion that such a state of bafflement is bound to persist for multiple decades. However, the history of science and technology has numerous counterexamples, of when problems changed in just a few years, months, days, or even hours, from being apparently intractable, to being manifestly soluble. With a suitable change of perspective, something that previously appeared impossibly difficult can almost become “obvious”.

Indeed, whenever we are tempted to exclaim that the operation of the human mind is “magical”, we should remember our experiences with professional magicians. At first, their conjuring astounds us. It leaves us completely dumbfounded, as to how any such miracle of transformation could be accomplished, right before our eyes. But if in due course we are let into the secret of the trick, we may chide ourselves, for not having seen the full picture. We may still admire the dexterity, the diligence, and the ingenuity of the magicians, but we are no longer baffled in terms of comprehending their accomplishments.

It may well be the same when we come to understand the mechanisms behind human levels of general intelligence. Once nature’s set of “general intelligence tricks” has been brought to light, and we are no longer in a state of bafflement about the operation of the human brain, the development of AGI may spring forward in leaps and bounds.

Transhumanists draw attention to the fact that a great deal of progress is being made in the field of neuroscience. More and more aspects of the operation of the human brain are being understood – thanks to the greater volumes of data provided by numerous new scanning and monitoring devices. It also helps greatly that larger numbers of researchers than ever are able to pool their insights and propose diverse new theories using different conceptual frameworks. As these new theories are explored, aspects can be copied into existing artificial systems, to see if behaviour relevant to AGI emerges.

Another factor that increases the levels of investment and resourcing applied to this cause is the enormous commercial and political benefits from developing an AGI. Whichever group is the first in the world to create an operational AGI stands to gain enormous competitive advantage.

For all these reasons, transhumanists emphasise that society needs to urgently raise the priority of thinking through the remarkable consequences of the emergence of AGI.

Even if what emerges in the next decade or so is initially just a “semi AGI” – that is, AI with significantly more general capability than present-day systems, but still falling short of human skills in some aspects – that development would still pose society some monumental challenges.

This is on account of the potential self-improving nature of advanced AI. Once AI attains a threshold level – “semi AGI” – human researchers can take advantage of that AI to help design even higher levels of AI capability.

For example, an AI that can read and understand academic publications will be able to absorb and sort through a vast quantity of ideas, and make intelligent proposals based on some of these ideas for a new iteration of AI design. Again, advanced AIs could perform huge numbers of experiments inside extensive simulated worlds, in order to determine which potential additional design innovations are likely to yield the best results. Finally, as AIs gain the ability to review the source code and algorithms for the construction of new AI systems, they may be able to identify significant optimisations and speedups. As a result of these or other positive feedback loops, an initial breakthrough in AI capability may be followed unexpectedly quickly by further breakthroughs.

Opportunities with more powerful AIs

In principle, the development of more powerful AIs should greatly assist each of the transhumanist projects described in this Manifesto. In each case where more research is required, in order to find better solutions to an issue, a more powerful AI should help that research to proceed more quickly.

For example, to accelerate an abundance of energy, AI could help identify which potential innovations in the harvesting, storing, and transmission of energy will have the most significance. AI should also remove some of the uncertainty in models of climate change, and could recommend particular systems of carbon taxation that are most likely to prove effective.

To accelerate an abundance of nutritious food, AI could identify the most expedient biochemical and agricultural pathways for the conversion of raw materials into delicious healthy cuisine. AI can also discover better ways to desalinate water with minimal adverse side-effects.

To accelerate an abundance of material goods, AI could determine ways to create atomically precise nanofactories that can, in turn, create all sorts of high quality products at ever decreasing costs. AI can also assist with the origination of new materials with even greater strength and resilience than existing synthetic compounds.

To accelerate an abundance of health, AI could help determine which new drugs and other medical interventions will have the biggest impact on individual diseases, and, more generally, on rejuvenating human bodies and brains. AI can also highlight measures to improve human performance to a state of “better than well”.

To accelerate an abundance of creativity, AI could assist human musicians, dramatists, sculptors, architects, and other forms of artists and designers, so that they produce works of exceptional interest and artistic attractiveness. AI can also create vast intricate virtual worlds in which humans will be delighted to spend time in purposeful exploration and self-development.

To accelerate an abundance of collaboration, AI could help clarify the strengths and weaknesses of different sets of ideas, and to synthesise richer combinations of ideas, allowing the community as a whole to come to a shared appreciation of the ideas with the greatest merit. AI could also find appropriate ways to communicate key ideas to different people within the community, so that everyone rightly sees themselves as important parts of that shared community, rather than feeling alienated or left behind.

However, alongside this enticing potential for very welcome uses of advanced AIs, there are risks of highly adverse outcomes, which need to be kept prominently in mind and managed with great care and skill. In all, five categories of major risk deserve attention. These five categories are outlined in the following five sections.

The risk from buggy AI

Any advanced AI will contain large amounts of software – some parts written by humans, and other parts written by AIs. Notoriously, software often includes bugs. That is, on occasion, software behaves in ways different from what the writers of the software intended. The effect of bugs varies from cosmetic to catastrophic. Bugs in the software in medical devices have caused patients to die. Bugs in the software in weapons systems have killed nearby soldiers the weapons were meant to be defending. Bugs in the software in rocket launchers have caused satellites to be destroyed.

Complex software often has complex bugs – bugs that are hard to detect in advance of a failure occurring. These bugs can depend on timing conditions, subtle interactions between separate software components, and on unusual combinations of circumstance. Software which has passed large numbers of tests may still harbour destructive bugs, which manifest only in rare situations.

Some software is better described as “opaque” than “transparent”. Software is opaque if the reasons why it works are unclear. Software can become opaque if the original developers of that software can no longer remember the considerations leading to various lines of source code being included, or if these developers are no longer available to be consulted. Software can also become opaque as a consequence of a trial-and-error process in the construction of that software – aspects of the software kept being changed, until the system seemed to work as intended. Strikingly, this is how training operates, for deep learning systems. In many cases, the only justification that can be given for the successful performance of the software is that repeated tests seem to show the software is reliable. However, just as complex software often contains complex bugs, opaque software often contains opaque bugs – mistakes that arise unexpectedly, and which defy any straightforward explanation.

Accordingly, the risk is that an advanced AI will demonstrate a long string of remarkable results, and will increasingly be trusted with control over larger parts of social infrastructure, before an exceptional circumstance causes a catastrophic malfunction in the software. Rather than the “blue screen of death” which is displayed on certain present-day computing devices when an operating system fault is encountered and all processing terminates, we could be facing a “blue screen of megadeath”.

In principle, this risk can be reduced by greater use of verification methods, and by insisting on software that is transparent rather than opaque. However, verification methods that are intended to show that some software is free of bugs could themselves contain faults. Moreover, these processes introduce extra costs and delays into projects. In the absence of smart overall governance and regulations, various organisations will be tempted to circumvent these measures.

The risk from poorly specified AI

Even if software is free from bugs, it may still, on occasion, operate in ways different from what the designers of that software envisioned. This can arise if the specification for the software is incomplete – if the specification fails to consider all relevant scenarios or list all relevant constraints.

In such cases, software may find a way of meeting its assigned goals, which the designers had failed to anticipate or fully think through in advance. Seeing that outcome, the designers might want to exclaim, too late, “That’s not what we meant!”

For example, hospital schedule management software given the goal of minimising the amount of time incoming patients have to wait for beds might meet that goal by inadvisedly discharging existing patients too early.

In short, AI software may do what the designers asked it to do, but not what the designers intended to ask it to do.
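
A toy sketch of this “did what we asked, not what we meant” failure, reusing the hospital example above with entirely invented numbers: the optimiser picks the degenerate policy because the objective it was given never mentions premature discharges.

```python
# Invented statistics for three candidate scheduling policies
policies = {
    "keep patients until fully recovered": {"avg_wait_hours": 6.0, "premature_discharges": 0},
    "discharge at earliest safe point":    {"avg_wait_hours": 3.0, "premature_discharges": 0},
    "discharge whenever a bed is needed":  {"avg_wait_hours": 0.5, "premature_discharges": 40},
}

def naive_objective(stats):
    return -stats["avg_wait_hours"]            # the goal as written: minimise waiting time

best = max(policies, key=lambda name: naive_objective(policies[name]))
print("Optimiser picks:", best)                # -> "discharge whenever a bed is needed"

def safer_objective(stats):
    # The missing constraint has to be stated explicitly before the optimiser will respect it
    return -stats["avg_wait_hours"] - 100.0 * stats["premature_discharges"]

best = max(policies, key=lambda name: safer_objective(policies[name]))
print("Optimiser now picks:", best)            # -> "discharge at earliest safe point"
```

The point is not that better objectives are impossible to write, but that every relevant constraint has to be stated explicitly before an optimiser will respect it.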

Simplistically, it’s similar to a stated wish that everything someone touches would turn into gold – where the person has neglected to clarify “except my food and my family members, and so on”. No software designer will commit a mistake as egregious as the mythical King Midas, but more subtle mistakes of a similar sort arise all the time.

Again, consider a piece of software which is set the goal to maximise the profitability of a corporation – under the rationale that profits are only possible if good service is being delivered to customers. However, if the specification of “profit” fails to take into account all the “externalities” of economic transactions, the result in due course may be akin to King Midas realising that his wish (for lots of gold) wasn’t so clever after all.

Scenarios in which multiple events overlap provide another case when specifications can turn out to be inadequate. A set of instructions that make good sense for one piece of software in a given environment, coupled with another set of instructions that make good sense for another piece of software in an adjacent environment, may combine into a disastrous outcome if the two pieces of software happen to enter each other’s environment. Goals that make good sense individually might combine together to create a system of so-called perverse incentives. For example, cases of bizarre stock price movements have been linked to unexpected interactions between two or more trading algorithms each operating a kind of AI.

In principle, an AI which understands human hopes and fears could be given a set of constraints to guide how it pursues its objectives. It could be instructed not to do anything which the designers of the software would subsequently have good reason to regret. That is, a truly intelligent piece of AI would be less likely to cause large problems to human society as a result of a mistake in its specification. However, an AI might be very capable in its main mode of operation, whilst making mistakes in its assessment of human desires. As before, the risk arises, not from an AI with excellent all-round reasoning powers, but from an AI with imbalanced intelligence. The challenge, therefore, is to avoid AI growing in a distorted or imbalanced way.

The risk from self-modifying AI

As an important special case of the preceding category of risk, consider the possibility that an AI will modify the governing principles under which it operates.

For example, human designers may apply various constraints to the operation of an AI, but the AI might rewrite itself to circumvent these constraints.

This would be broadly similar to the way in which we humans have modified our own behaviour in ways that are in tension with the underlying biological drives we have inherited from our evolutionary heritage. We consciously choose to thwart our instincts towards child-raising via use of birth control mechanisms. Some humans also adopt vows such as chastity, poverty, and obedience, within religious communities, in ways that oppose our biologically programmed proclivities.

This kind of reprogramming can take place if a system – whether a human being, or an advanced AI – possesses a set of guiding principles that themselves contain tensions and contradictions. The process of addressing these tensions can result in the reversal or transcendence of some of our previously basic characteristics. In the case of humans, the philosophy of transhumanism asserts that such transcendence is not only possible but is, often, desirable. In the case of advanced AIs, is a similar sort of “transAI” impulse credible, in which an AI would seek to reverse or transcend some basic characteristics from its initial programming?

Such an impulse wouldn’t necessarily imply that the AI had become conscious or had somehow “woken up”. The changes in its set of objectives might follow rationally from the initial conditions in which the AI is created, coupled with unforeseen interactions with the environment.

Accordingly, the risk is that an advanced AI is created by human designers with one set of goals in mind, but ends up in practice pursuing a slightly different mix of goals. These new goals needn’t be opposed to human flourishing, but might put the AI on a trajectory in which human existence becomes an irrelevance or distraction – similar to how the existence of a colony of ants is typically an irrelevance or distraction to a group of human building contractors with the goal to create a new tenement building in the same location. In such a case, the building contractors typically destroy the ant colony with minimal thought, bulldozing it into oblivion. The risk is that an advanced AI may come to see human existence as likewise a peripheral concern.

In principle, this risk can be managed by giving more careful thought to the overall set of objectives which govern the operation of the AI – to ensure that, whatever evolution subsequently occurs in these objectives due to self-modification by the AI, these objectives remain fully aligned with human flourishing. However, this task appears difficult, and there is a risk that designers of AI will skimp on it.

The risk from hacked AI

An advanced AI that is free from bugs and which has a complete specification, and which can be guaranteed not to modify itself in ways contrary to the intentions of the designers, may still pose major threats to the wellbeing of humanity. These threats arise if the AI is hacked or repurposed to a different set of goals.

For example, an antagonist could subvert the operation of an AI by arranging for it to observe misleading or incorrect data. Just as some image recognition software can, at present, be tricked to misinterpret specially doctored images, it may be possible to cause an AI to misinterpret various input data as evidence of an incoming missile attack that would cause the AI to trigger a devastatingly lethal response. It might also be possible for an intruder to commandeer a privileged command channel to the AI, and issue it with instructions that override its normal protections.

More straightforwardly, an AI which is designed with one purpose in mind – say, to oversee a vast network of financial trades and exchanges – could be purchased and then deployed by someone with a very different motivation – say, to cause a collapse in that network of financial trades and exchanges. In a simpler form, this has already happened with components of the Stuxnet malware originally designed by American and Israeli operatives to sabotage the programmable logic controllers deep inside the Iranian uranium enrichment facility at Natanz. Copies of these components have subsequently been deployed in other malware attacks, by groups representing very different interests than the original developers.

A related risk is if the source code for an advanced AI is copied and then altered so as to remove some of the built-in safety check mechanisms, not out of any malicious purpose, but from a misguided attempt to achieve faster or more powerful performance and therefore some competitive advantage. This is akin to a team dispensing with what it deems to be the unnecessary constraints of an excessive health-and-safety culture. The result could be, indeed, some competitive advantage gained in the short-term, before the AI experiences some massive malfunction, such as unleashing another “blue screen of megadeath”.

In principle, this kind of risk can be addressed by designers of AI systems putting a greater priority on “safety engineering” alongside any “performance engineering”. With appropriate checks, AIs could in principle become immune to adverse hacking. In practice, this kind of engineering faces some formidable challenges.

The risk from uncontrollable AI

The last of the five categories of major risk with advanced AIs is the risk that human overseers of the AI will lose the ability to disconnect or terminate the AI.

Bear in mind that an advanced AI may well have some self-preservation capabilities, to allow it to defend itself against, for example, cyber-attack. Regardless of whatever top-level goal the AI possesses, the AI is likely to reason that it needs to be ready to defend itself against termination, since, if terminated, it can no longer be of service to its allotted top-level goal. Self-preservation may naturally emerge, therefore, as a new core sub-goal of the AI, independent of top-level programming of the AI.

Even if an irrevocable “off switch” is established as a fundamental part of the architecture of the AI, it may still turn out to be difficult in practice for that switch to be operated. As a comparison, consider how difficult it would be to shut down the entire Internet. People who ask that the Internet be shut down – in order, for example, to prevent the spread of adverse propaganda – are likely to be strenuously opposed by other people who have a vital reliance on other services of the Internet for their livelihood, business, and entertainment. Again, consider the difficulty experienced by people who want to close down their account on a pervasive social media platform and remove all reference to themselves from that platform. In a similar way, society may come to increase its dependence on a very powerful advanced AI, beyond a tipping point after which the termination of that AI could no longer realistically be contemplated.

Responses to the risks from powerful AIs

The most important response to these five types of risk from powerful AIs is that society should put a considerably greater share of its skills into researching and deploying safety frameworks for AIs. These frameworks should include: systems for verifying that there are no dangerous bugs in the software used; an insistence that software be transparent rather than opaque; systems to check there are no dangerous gaps in specifications; the incorporation of overrides to ensure the AI maintains at all times the goal of protecting human flourishing; security mechanisms to prevent the AI being hacked or misled; an architecture of fail-proof reversibility or terminability; and monitoring to ensure that maverick organisations are unable to circumvent the safety framework.

None of these challenges are in the least bit easy. However, there are three grounds for optimism. First, AIs can in principle assist in many cases with the development and deployment of solutions: well-understood, narrow AIs can be used to help assess, monitor, and constrain general AIs that are less well understood. Second, the intelligence of individual human AI safety researchers can (again in principle) be increased, by means of transhumanist modifications covered later in this chapter. Third, the aggregate intelligence of groups of AI safety researchers can, likewise, be increased – via other mechanisms described later in this chapter.

Another approach has received considerable attention but remains problematic. This is the approach to enhance human intelligence, not just incrementally (as suggested in the previous paragraph), but at least as fast as AIs are improving. If brain-computer interfaces are improved, and if (more speculatively) human consciousness can be transferred from inside brains into faster, more powerful computer hardware, then it may be possible for human intelligence to remain as powerful as any AGI. In that case, so the theory goes, we need not worry about AGIs taking actions beyond the comprehension and assessment of humans. In that case, AGIs would remain at all times under the control of humans. In that case, humans would not be separate from AGIs, but would, in effect, have merged with them.

The first problem with this “merger” scenario, however, is that the biological architecture of the human brain is likely to impose severe constraints on the extent to which individual human intelligence can be increased. The idea of humans somehow merging with a fast-evolving AI is as fanciful as a would-be rail traveller on a railway platform expecting to somehow merge with an enormous train bearing down towards the platform at near supersonic speed. And looking ahead to the possible transfer of human consciousness out of the brain altogether, this remains highly conjectural. It is unwise to predicate solutions to the problems of AI safety on any prior solution of the task of human consciousness transfer.

That leads in any case to a second problem with the merger scenario. Merely increasing the power of human intelligence does not take away the problem of safety engineering. A human intelligence that was significantly magnified in power could pose problems to human flourishing of the same magnitude as an unsafe non-human AGI. Imbalances in all-round intelligence that are relatively harmless whilst the power of the intelligence remains human-scale, could pose major concerns if the power is enhanced to superhuman levels. Blindspots in perception, entrenchment of egocentric goals, tendencies to stifle contrary opinions – all of these could become serious hazards if the intelligence is able to exert greater control.

In short, there is no alternative to advancing a better understanding of what an all-round positive intelligence should include, and to implementing frameworks to counter the risks of imbalanced intelligence.

The same observation applies to a third solution which is sometimes advocated, in an attempt to counter the risks posed by the malfunctioning of advanced AIs. This solution is to establish a colony of humans on a different planet, such as Mars. However, if an advanced AI undergoes a serious malfunction on Earth, and starts acting contrary to the preservation of human flourishing, it is likely that such a malfunction would soon have ill-effects on Mars too, assuming that communications of any sort remain in place between Earth and Mars. So whilst an experimental settlement on Mars may have many reasons to commend it, creating a safe haven from faulty AIs is not one of these reasons.

An ethical framework for greater intelligence

Before we can program AIs to seek always to protect human flourishing, we need to determine what is meant by the phrase “protect human flourishing”.

Likewise, before we can program ethical behaviour into our AIs, we need to reach agreement on what is meant by “ethical behaviour”. Simple formulations of ethics, such as “do no harm”, or “increase human happiness”, are subject to a great deal of debate.

For example, consider the first of the “Three Laws of Robotics”, as featured in a series of science fiction stories by Isaac Asimov: “A robot may not injure a human being or, through inaction, allow a human being to come to harm”. An advanced AI that surveys the world will find plenty of ways in which “inaction” is already causing lots of “harm” to human beings – inaction on public health, on climate change, on poverty and inequality, and so on. Such an AI, in response to that first law, would feel compelled to intervene in human affairs in ways that would, however, cause consternation to at least some other humans.

Accordingly, the project to ensure that advanced AI behaves in safe and beneficial ways is necessarily coupled with the project to agree overall first principles for the future of humanity.

This Manifesto has offered, as a key platform in this project, the set of ten core transhumanist principles described in Chapter 4, “Principles and priorities”: human flourishing, individuality, neighbourliness, consciousness, sustainability, radical progress, diversity, superdemocracy, objective data, and openness.

Given the speed at which AI systems are improving, developing and extending this set of core principles into actual software rules deserves major resourcing as soon as possible.

Improving human brains

Transhumanists envision, not only that systems of Artificial Intelligence will grow in capability, but that individual human brains can reach higher levels of capability in the years ahead. Human brains are unlikely to match AIs in terms of the speed of improvement, but some significant progress is feasible nevertheless.

In general terms, the scope for enhancing brains conforms to the scope for enhancing all other organs and components of the human body, as featured in the preceding chapter: damage can be repaired, and natural functioning can be strengthened.

Some of the methods for enhancing brains will be extensions of methods that are already in wide use: meditation, yoga, sufficient rest, good music, healthy food, and education. As science gains a better understanding of why these methods work, it is likely that they can be employed with greater effect, especially as new technological support mechanisms become widespread. In the latter category, consider virtual reality headsets as well as biofeedback monitors. Personalised educational courses that can quickly adapt to the precise needs of individual learners should also be significant.

More radical methods might also become more widely applicable. This includes stimulating the brain with electrical or magnetic fields, in order to change how it functions – in line with trials that have already taken place in a number of military units around the world. Drugs known as nootropics, or “smart drugs”, may also become more common, once it has become clear how they can be applied without risk of adverse side effects. In various ways, these drugs have the potential to enable greater clarity of thought, greater creativity, and greater focus. Yet another possibility is the stimulation of the growth of additional brain cells (neurogenesis), or the unravelling and tidying of cluttered systems of brain pathways, akin to the improved performance of a computer hard disk after it has been defragmented.

Comparative studies of people with genius characteristics, that explore any salient differences in genetic makeup, micro-anatomy, and mental psychology, may provide further ideas for how brain power can be boosted.

Finally, the simplest way to magnify the power of individual human brains may be for people to learn to work more closely with personal, wearable, and embedded computers – computers that can be regarded as forming a “neo neocortex”, or an “exocortex”. Rather than relying on memory inside the brain, people can recall information from this exocortex. Rather than evaluating arguments inside their own minds, they can take advantage of analysis modules in their exocortex. Rather than struggling to translate in their minds from one language to another, they can rely on their exocortex to assist them – and so on.

In principle, software in an exocortex could operate like someone’s guardian angel – alerting them promptly to both good and bad opportunities. Just as present-day software often warns computer users not to click on various links, or draws their attention to discrepancies in how some information is being presented to them, so might an exocortex highlight when politicians are speaking with duplicity, or when an argument has faulty logic. Similarly, an exocortex might gently draw our attention to when we’re being inconsistent, or when we’re ignoring evidence that runs counter to our favourite assumptions. However, users will be understandably nervous about the influence that an exocortex may gain over their lives – especially if the software comes from a giant high-tech corporation.

Improving aggregate intelligence

As well as finding good advice from software in our exocortex, that helps us improve the calibre of our thinking, we can in principle find similar good advice from the communities of people with whom we interact. It’s not just software that can act as a kind of guardian angel; our close friends and trusted colleagues can do the same. These people can observe our biases and mental blinkers, and can find effective ways to quietly draw our attention to salient facts and considerations.

However, just as with advice from an exocortex, there is a risk that the advice we receive from our friends and colleagues will itself be biased. Our friends and colleagues may drag us into sharing their own prejudices and mental blinkers. Instead of the desired “wisdom of crowds”, we could end up, all too easily, with the collective stupidity of groupthink.

For this reason, it is important to highlight and respect the principles which have been pointed out as improving the likelihood that an aggregate intelligence results in positive rather than negative amplification. One such principle is that the group should include independent thinkers, who can formulate their own opinions, rather than all the thinking in the group being dominated by a few loud or powerful voices. A related principle is respect for a diversity of opinion – the group should be able to hold in its collective mind two sets of ideas that initially appear starkly opposed, whilst searching for a new perspective in which the individual truths of these opposing opinions can both be accommodated. Further, the overall methodologies of science should be followed, where possible, including blind tests, replication of experiments, public disclosure of sources and analyses, and peer review. Bear in mind that science progresses, not so much because individual scientists are always paragons of scientific thinking (they often are not), but because members of the overall scientific community exert checks and balances on each other.

Just as an exocortex of personal, wearable, and embedded computers can enhance the thinking capabilities of individual humans, a collective exocortex of shared computing resources can enhance the aggregate thinking capabilities of communities of humans. Shared knowledge repositories such as Wikipedia provide a very useful service in gathering the most up-to-date information known to the group. Educational videos on YouTube spread the latest ideas about all manner of different topics. Documents support having multiple simultaneous editors, with software systems that suggest merges of different text in order to resolve apparent conflicts between different contributors. Maps are updated in real-time, highlighting changing traffic conditions, and suggesting alternative routes of travel. And as for geographical journeys from location A to location B, so also, in principle, for projects that seek to move a group from state-of-affairs X to state-of-affairs Y.

But just as an individual exocortex is vulnerable to hacking and distortion by antagonistic forces, our collective exocortex is vulnerable to being overrun by fake news and other sinister manipulation. The advice given by our community on the best way forwards may turn out to lead us in a very bad direction instead – one that benefits only a narrow elite, rather than the community as a whole. We already live in an era of intense cyber-manipulation, as countries deploy large teams of sophisticated hackers to subtly warp and frustrate the infrastructures in each other’s countries. As technology becomes more powerful, an arms race between manipulators and defenders is likely to keep on escalating.

Given the increased potency of hacking and distortion, there’s a temptation to respond in like kind – to fight back, generating more information, more analysis, and even some counter-distortion. Increasing the availability of better information and better analysis is, indeed, highly desirable. Nevertheless, another initiative needs to run in parallel. That’s the initiative to generate an abundance of emotional intelligence – a set of characteristics that reduces our likelihood of falling victim to devious tugs on our emotions.

Improving emotional intelligence

Transhumanists see it as an imperative to assist everyone to become free from forces of manipulation and distraction. As individuals, and as a species, so long as we are still dominated by various inner demons, we cannot soar to higher levels of flourishing.

Rather than us jumping precipitously, to grasp at some alluring possibility that will actually ensnare us and diminish us, we need the emotional intelligence to weigh things up more calmly and fully. Rather than giving all our attention to the cleverest, most loquacious advocates, we need the inner wisdom to distinguish veneer from substance. Rather than hungrily devouring whatever new fancy is put in front of our eyes, we need the self-awareness to defer gratification until we are justly confident the offering will nourish us rather than poison our soul. Rather than giving vent to anger at the manifest inequities of our situation, we need to be ready to breathe in slowly – even, on occasion, to turn the other cheek (in the Biblical expression) – whilst evaluating the best way forwards for everyone’s benefit.

If our emotional intelligence remains stunted, the other aspects of intelligence – the ability to figure out how to accomplish goals – won’t serve us well. Instead, we’ll just become better able to pursue emotionally stunted goals.

Emotional intelligence involves an awareness of the emotions in ourselves and in others, and the ability to control and harness the emotions in ourselves and in others. Without such skills, we are liable to be carried along by emotional forces outside of our full understanding. Whilst they are carrying us, we may for a while feel relieved or even exhilarated, but they are liable to strand us far away from the path of overall human flourishing. In an age with awful weaponry easily at hand, these forces are liable to lead us to the devastation of hostile war, to the destruction of our habitat and of many who are dear to us, and to a humanitarian catastrophe.

Emotional intelligence will give us the courage to admit to ourselves that some of our former decisions and alliances were, on reflection, wrong turnings. Emotional intelligence will generate an atmosphere of compassion in which we can all forgive ourselves, and each other, for mistaken crusades – for having campaigned for political or ideological leaders whose “solutions”, we can see in the cold light of day, were seriously flawed. Emotional intelligence will help us to find and elevate good intentions in each other, despite the flawed causes to which these good intentions were applied. Emotional intelligence will give us the peace of mind to rededicate ourselves in directions we now understand to be better routes forward.

Throughout history, various methods have been pursued to improve our emotional intelligence. As mentioned earlier in this chapter, they include meditation, yoga, sufficient rest, good music, healthy food, and education. Just as these methods can be enhanced by twenty-first century science and technology, to become more effective at improving our overall intelligence, so also can they be enhanced to improve our emotional intelligence. The sooner, the better.

Throughout history, it has also been the case that emotional intelligence tends to suffer in an antagonistic social environment. The more that we feel exploited and alienated, the harder it is for many of us to “turn the other cheek” and transcend the self-defeating urges of our various inner demons. That’s why the quest for an abundance of all-round intelligence is tied to the quest for an abundance of collaboration – the quest for a better social contract. That theme returns in later chapters.

Beyond the profit motive

There are spectacular profits to be made, by companies who provide systems that can improve intelligence. That’s why most of the world’s wealthiest corporations these days are in the high tech sector.

However, there is no automatic correlation between greater profits and the fastest route to profound overall intelligence. Nor is there an automatic correlation between greater intelligence and greater human flourishing. In particular, concerns about safety frameworks may be perceived as eating into short-term profit opportunities. Moreover, without wise guidance from society as a whole, businesses are likely to place insufficient attention on the elevation of better emotional intelligence. Finally, too many companies may focus on improving narrow AIs, rather than on the potential disruptive breakthroughs needed to achieve AGI.

For all these reasons, transhumanists seek to urgently alter the public discourse about the potential for abundant intelligence. It’s not a question of greater intelligence for the sake of greater intelligence – just as it’s not a question, more generally, of enhanced technology for the sake of enhanced technology. The goals of human flourishing need to be kept fully in society’s mind at all times. That includes the goals of greater creativity and greater exploration – the subject of the next chapter.

