This page contains Chapter 4 from
Sustainable Superabundance: A universal transhumanist manifesto for the 2020s and beyond
4. Principles and priorities
It’s time to explore more deeply the ideas underpinning the Transhumanist Manifesto.
This chapter sets out a number of fundamental principles, and reviews some high-level illustrations and implications. The seven chapters that follow apply and extend this set of ideas in each of seven dimensions of human life where sustainable superabundance can bring profound transformation.
Nine core principles
We humans cannot live by bread alone. Nor do we live to work. These factors – the nourishment we consume and the work we undertake – are means to an end, but are not ends in themselves.
Society needs new top-level goals. Society should no longer prioritise, above all else, economic metrics such as the Gross Domestic Product or the Employment Rate.
Instead, here are a number of principles that merit being at the core of decision systems. To give them a name, they can be called “the nine core principles” or “the nine transhumanist principles”.
First, the prioritisation of human flourishing: prefer actions that lead to the increase of human flourishing. Flourishing involves happiness, but there is more to flourishing than happiness. Flourishing involves energy and nourishment, but there is more to flourishing than energy and nourishment. Flourishing likewise encompasses but extends beyond creativity, intelligence, health, collaboration, and awareness. Over time, our understanding of the conditions and possible expression of human flourishing will surely evolve and improve. That’s as it should be.
Second, the fundamental importance of human individuality: individual flourishing should not be sacrificed or subordinated to collectivist goals. Society should protect and elevate all members of society. Individuals should never become cannon-fodder in service of some tribal, national, ethnic, religious, or ideological quest.
Third, the principle of active neighbourliness: treat others in the way we would ourselves like to be treated, if we were in the same situation. Rather than keeping quiet about dangers about to befall someone, or opportunities they are about to miss, we should find a way to speak up, just as we would ourselves like to be alerted to these dangers or opportunities in an equivalent circumstance.
Fourth, the generalisation of the previous principles beyond present-day humans: prefer actions that lead to the increase of flourishing of consciousness. To the extent that animal or artificial minds possess core attributes of consciousness, these minds deserve at least some of the same care and support as human minds. This care includes possibilities for growth and development, and the reduction in needless suffering.
Fifth, the generalisation to longer timescales, thereby highlighting sustainability: avoid actions that reduce the possibilities for future flourishing. Our plans need to enable, not only flourishing today, but also flourishing tomorrow (and the days and years that follow).
Sixth, the recognition that the future can be radically different from the present: the present circumstances of humanity should by no means be regarded as the desirable pinnacle of evolution. A very much better future lies ahead of us, provided we recognise that possibility, and take appropriate actions.
These six principles, as stated, leave many questions unanswered. They define a broad envelope that can accommodate a multiplicity of different viewpoints. That diversity is, itself, something to cherish. Hence a seventh core principle: nurture and tolerate diverse opinions within the overall transhumanist framework.
Here’s an eighth core principle: where different viewpoints within the overall envelope clash in terms of action to be taken, it is up to the community as a whole to deliberate and reach agreement. This is where the practice of superdemocracy comes to the fore.
Finally, as a ninth core principle: in deliberations between conflicting insights, no book, thinker, or tradition should be given any absolute priority. Society needs to remain open to current favoured ideas and methods being superseded. Of course, respect can be shown to books, thinkers, or traditions with good track records as sources of insight. But that respect should be tempered with caution. Runs of success can come to an end – especially in new circumstances or new contexts.
In summary, the suggested core principles are: human flourishing, individuality, neighbourliness, consciousness, sustainability, radical progress, diversity, superdemocracy, and openness.
To illustrate the ninth core principle just mentioned, consider the notion of technocracy – respect for decisions by domain experts.
Other things being equal, it’s sensible to pay attention to viewpoints from reputed domain experts. For example, in a sailing boat blown into unfamiliar turbulent waters by a storm, the recommendations of seasoned navigators deserve more attention than the opinions of a first-time sailor. Expert doctors are more trustworthy on matters of an individual patient’s health than lifestyle advice found in mass distribution horoscope columns.
However, all viewpoints should be subject to query and analysis. Experts are often wrong.
Moreover, the fact that someone is an expert in one domain does not entail that their viewpoints deserve any special priority in other domains. An expert sailing navigator gains no authority in a different field, such as medical treatments, just by virtue of their sailing expertise.
As it happens, decisions frequently involve the intersection of several different domains. A decision that appears sound from one perspective may be recognised as inadequate when other perspectives are introduced. Listening only to experts from the first perspective risks reaching a bad decision.
Even when someone is an undoubted technical expert in a given domain, it's worth them investing time and effort in explaining to the general public the reasoning behind their recommendations. Rather than being forced onto uncomprehending recipients, key decisions should be collaboratively understood.
Accordingly, the ideal of technocracy needs to be subordinated to the ideal of superdemocracy – the involvement of the entire community in the process to reach decisions.
As another important example of the ninth core principle proposed above, consider science.
Science should be heartily applauded – not only for its many tangible accomplishments, but also for the merits of the methods it follows.
Indeed, the multiple beneficial technologies that will enable sustainable superabundance have arisen from wide adoption of the principles of the scientific community. That is, theories have been subjected to experimental analysis, findings have been published openly, agreement has been deferred pending replication and peer review, and advocates of positions have explored in advance which results would count as refutations (falsifications) and hence pose a real challenge to their favoured hypotheses. These are operating principles well worth upholding.
However, let's acknowledge there are questions which science, by itself, cannot answer. Although science can answer the question of what's a good way to accomplish goal X, it may not be able to say whether goal X is itself desirable.
Let's also recognise that scientific knowledge is provisional and incomplete. Advice on low-fat diets is one example where scientific orthodoxy has significantly changed. To be scientific is to acknowledge that scientific theories can change.
Again, methods that make sense in some fields of science – such as double-blind trials, in which experimenters aren't aware which members of a sample are receiving a given treatment, as opposed to a placebo – aren't applicable in all other fields of science.
In short, whilst science is one of the key tools used by humanity to advance towards sustainable superabundance, science in itself is not the end goal. Nor is the scientific method the only tool in our toolkit.
Accordingly, while championing the scientific method, we should resist the siren call of scientism (the exaggeration of the capabilities of particular scientific methods).
Whereas science can tell us what’s a good way to accomplish goal X, we need to look beyond science to decide which goal X is worthy of pursuit.
It is transhumanism that provides the answer, with its vision of the profound ongoing elevation of all-round human health, human wisdom, human wellbeing, and human freedom.
Transhumanism comprises a set of philosophies of life that (to refer to a 1990 definition by philosopher Max More) “seek the continuation and acceleration of the evolution of intelligent life beyond its currently human form and human limitations by means of science and technology, guided by life-promoting principles and values”.
In order for us to achieve the superabundance envisioned in this Manifesto, there will first have to be significant improvements to human nature. Superabundance will not feature present-day humans in a new environment. It will feature humans enhanced in fundamental ways, freed from core defects that have hitherto limited our accomplishments. It will feature humans on a transhumanist journey towards transcendent posthuman capabilities.
The word “transhumanism” sometimes provokes negative reactions. However, it is likely that the word will become increasingly mainstream.
One reason people have expressed doubts about transhumanism is because of the perceived similarities between transhumanism and religion.
Indeed, transhumanism can be seen as a kind of fulfilment of religion – in the sense of providing an overriding vision, credible in the modern day, that will inspire greater social harmony and positive community endeavour.
However, transhumanism is distinguished from traditional religion by having all its viewpoints open to questioning and updating. There are no inviolable canons of belief or “holy books” in transhumanism.
Throughout history, religion has operated in ways that are both positive and negative.
Religion has often oppressed people, conducted witch hunts and inquisitions, limited personal choices, forbidden critical thinking, imposed outdated conceptual frameworks, and caused widespread psychological suffering.
Religions deserve to be opposed if they agitate against the best insights of scientific enquiry, such as evolution through natural selection, or the benefits of vaccinations in providing immunity to many diseases.
Religions should also be opposed if they champion rules or practices that are strongly anti-humanitarian – for example, if they forbid the education of girls, prevent divorce, promote female genital mutilation, or treat homosexuality or apostasy (someone becoming a non-believer) as capital crimes.
Society should resist any attempts by religious groups to penalise “blasphemies” such as criticism of central members of a religious tradition. There is no right not to be offended. Discussion needs to remain open.
At the same time, religion has often provided people with an important sense of purpose and community. Religion has often encouraged people to behave in morally positive ways.
Transhumanists can find many points of mutual support with religious adherents who avoid the adverse tendencies mentioned above. For example, transhumanists share with many religious adherents the goal to uphold human dignity and human flourishing, and to act responsibly (as a “steward”) towards the environment.
Religious adherents who are motivated to transcend the limitations of human nature – such as aging and mortality, as well as the propensity to behave badly (“sin”) – can find inspiration in the transhumanist mission to apply science and technology to abolish aging and to enhance human nature. These adherents can therefore see transhumanism as at least part of the culmination of their own religious aspirations.
History has featured many phases, with significant jumps in human capabilities between phases. Major transitions have been given names such as the cognitive revolution, the agricultural revolution, and the industrial revolution. We are presently experiencing an information revolution.
Each time human capability grows, our potential increases to change the earth, in both positive and negative ways. Earlier human migrations resulted in large scale extinctions of other animal species. Human activities are now impacting the atmosphere as never before. Careless use of weapons of mass destruction could result in the end of human civilisation – perhaps even the end of all human life.
The accelerating pace of the information revolution has led to the suggestion of a forthcoming “Singularity” in which change happens more quickly than ever before. Rather than waves of new technologies taking decades or even centuries to reach mass adoption, forthcoming disruptions might drastically alter society’s processes in years, months, or even weeks, days, or hours. This pace, encouraged by fierce competitive pressures, and enabled by self-updating automated processes which bypass the operation of slow human review, may well result in changes that no-one has been able to properly anticipate and evaluate in advance.
The notion of Singularity alarms some critics, who feel uncomfortable with the resonance with religious notions such as “the end of days”, the apocalypse, and the messianic establishment of paradise on earth. Rather than being distracted by ideas with religious connotations – or ideas churned over by Hollywood blockbusters – it’s better, say these critics, to concentrate on shorter-term real-world issues.
Transhumanists respond that it is better, instead, to keep an open mind.
Indeed, if society concentrates just on the risks and opportunities of the present-day, it may miss the larger risks and opportunities that, timewise, are just around the corner. And we may miss the possibility of making sufficient preparations to steer the forthcoming Singularity so that it results in sustainable superabundance rather than a much bleaker outcome.
Moreover, there will be other benefits from humanity developing skills and processes to help us steer a particularly rapid technological transition like the Singularity. These skills will increase the likelihood of us being able to steer other technological transitions of the coming years and decades, that may be less intense than the Singularity, but still far more disruptive than previous changes.
While transhumanists highlight the possibilities of large changes being “at hand” and “soon”, we cannot forecast any precise timing. But there are credible future scenarios in which these changes take place by the middle of this century. Perhaps sooner. Certainly within the lifetimes of many people presently alive.
Factors that make it plausible that radical changes may take place so quickly include the accelerating development of technologies such as AI and regenerative medicine – developments powered in turn by the worldwide activity of unprecedented numbers of scientists, engineers, designers, entrepreneurs, educators, and social activists.
This activity is further boosted by positive feedback cycles: tools that improve tools, computers that improve computers, software that improves software, and AI that improves AI. Again, better technology increases the power of educational systems (including YouTube videos), and better educational systems increase the throughput of capable engineers, designers, entrepreneurs, and systems integrators – who, collectively, can develop technological solutions of even greater utility. Again, better technology improves communications networks, allowing for a richer flow of ideas between technologists around the world, which in turn accelerates the creation and deployment of innovative products. The cycle continues.
Yet another factor driving ever higher levels of research and development is the enormous commercial and military advantage available to the groups that are the first to achieve key breakthroughs.
For as long as positive feedback systems remain in place, the result is exponential growth. In reality, many progress curves take an ‘S’ shape rather than an unending exponential curve: an initial phase of slow growth transitions into a phase of faster growth, which is then followed by another phase of slower progress, as the potential of a particular technological architecture is exhausted. However, if the underlying market conditions continue to favour improvements – for example, if there continue to be major commercial gains from hardware and software becoming more powerful – then a number of different ‘S’ curves can arise, one after another, each based on a new architecture or paradigm. Thus in the world of computing hardware, the architecture of vacuum tubes was superseded in turn by architectures involving single integrated circuits, massively parallel integrated circuits, and cloud computing. The individual ‘S’ curves combine into overall exponential progress covering a longer period of time.
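The way successive ‘S’ curves can stack into sustained exponential progress can be sketched numerically. The following is an illustrative sketch only: the paradigm midpoints and capability ceilings are invented numbers chosen to make the pattern visible, not empirical data.

```python
import math

def logistic(t, midpoint, ceiling, steepness=1.0):
    """One 'S' curve: slow start, rapid middle, saturating end."""
    return ceiling / (1.0 + math.exp(-steepness * (t - midpoint)))

def stacked_progress(t):
    """Three successive paradigms; each new architecture ramps up
    roughly where the previous one saturates, with a higher ceiling.
    Midpoints and ceilings are purely illustrative."""
    paradigms = [
        (10, 100),        # e.g. vacuum tubes
        (25, 10_000),     # e.g. integrated circuits
        (40, 1_000_000),  # e.g. massively parallel / cloud
    ]
    return sum(logistic(t, mid, ceil) for mid, ceil in paradigms)

# Sampled at the midpoint of each paradigm, the combined curve grows by
# a roughly constant factor per interval -- i.e. overall exponential growth:
for t in (10, 25, 40):
    print(f"t={t:2d}  capability ≈ {stacked_progress(t):,.0f}")
```

Each individual curve saturates, but because a new paradigm takes over as the old one exhausts its potential, the envelope of the stacked curves approximates a single long-running exponential.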
For as long as conditions remain in place that encourage ongoing exponential progress in technology, we need a sense of exponential urgency – an imperative to improve our ability to anticipate scenarios, ahead of our human support systems being overwhelmed by unforeseen consequences of technological change.
Talk of long-standing positive feedback cycles might suggest that the progress of technological development is somehow inevitable or predetermined. But that would be a big mistake.
Instead, the development and deployment of technology is significantly influenced by a wide range of non-technological factors, such as design, legislation, user expectation, public zeitgeist, and random unpredictable events. Accordingly, transhumanists should resist any ideology of technological determinism.
Principles such as “Moore’s Law” may describe general technological trends at a first level of approximation, but can mislead observers who fail to pay sufficient attention. Closer study shows that these principles are subject to variation in matters such as timing for performance to double (e.g. 12 months, 18 months, or 24 months), the meaning of what is delivered (e.g. number of transistors vs. computing power), and the market adoption of the underlying technological capabilities (e.g. CPUs being displaced for some tasks by GPUs and by cloud-based computing).
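The sensitivity of such trends to the assumed doubling period is easy to demonstrate with simple arithmetic. A minimal sketch, using the doubling times mentioned above:

```python
def growth_factor(years, doubling_months):
    """Cumulative improvement after `years`, given a doubling time in months."""
    return 2 ** (years * 12 / doubling_months)

# Over a single decade, the choice of doubling period changes the
# predicted outcome by well over an order of magnitude:
for months in (12, 18, 24):
    print(f"doubling every {months} months -> "
          f"{growth_factor(10, months):,.0f}x in 10 years")
```

A 12-month doubling time yields roughly a thousandfold improvement per decade, whereas a 24-month doubling time yields only around thirtyfold – which is why casual appeals to “Moore's Law” can mislead observers who don't check which variant is being assumed.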
Instead of sharp predictions of singular dates when future events will take place, what is more credible is a probabilistic prediction covering a range of dates and outcomes. For example, instead of saying that artificial general intelligence will assuredly cause a Singularity to occur in the year 2045, a better prediction is to say there’s a 50% chance that such an outcome will happen by that date – and also a 10% chance that it could happen by (say) 2025, and a 90% chance by (say) 2085.
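A probabilistic forecast of this kind can be represented as a cumulative distribution over dates. The sketch below uses simple piecewise-linear interpolation between the example quantiles quoted above; those quantiles are the chapter's illustrative figures, not actual forecasts.

```python
# Illustrative quantiles from the text: 10% by 2025, 50% by 2045, 90% by 2085.
QUANTILES = [(2025, 0.10), (2045, 0.50), (2085, 0.90)]

def prob_by(year):
    """Piecewise-linear estimate of the cumulative probability
    that the event has occurred by `year`."""
    if year <= QUANTILES[0][0]:
        return QUANTILES[0][1]
    if year >= QUANTILES[-1][0]:
        return QUANTILES[-1][1]
    for (y0, p0), (y1, p1) in zip(QUANTILES, QUANTILES[1:]):
        if y0 <= year <= y1:
            return p0 + (p1 - p0) * (year - y0) / (y1 - y0)

print(prob_by(2045))  # ≈ 0.5, by construction
print(prob_by(2065))  # ≈ 0.7, midway between the 50% and 90% quantiles
```

Expressing a forecast this way keeps the uncertainty explicit: instead of defending a single date, the forecaster can be scored on how well the whole distribution matches what eventually happens.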
And instead of any predictions that events will somehow “inevitably” result in a victory for transhumanist forces, what is much more credible is a probabilistic prediction covering a range of different impacts on the future of human flourishing.
That’s why transhumanists repeatedly highlight the importance of human volition and human action. Transhumanists understand the need to go beyond cheering from the sidelines – the need to develop and apply policies that have a real impact on actual social change.
Optimism generally leads to more personal energy than pessimism, so the former is to be preferred to the latter on that score. However, any optimism that is naive about potential drawbacks can lead to greater problems.
Rather than the description “techno-optimism”, transhumanists would prefer to be known as exemplifying “techno-realism”.
Whilst transhumanists anticipate the possibility of many wonderful consequences of technology, they are also aware that there could be terrible consequences as well.
The optimism of the transhumanists is grounded in sober appreciation of real-world issues and challenges. Rather than ignoring these challenges, transhumanists will formulate and evaluate potential coping strategies – strategies to manage the risks involved. When a risk is judged too severe, transhumanists will take actions to avoid the risk altogether.
Precaution and proaction
In determining priorities, both the precautionary principle and the proactionary principle have their place.
The precautionary principle is appropriate when there are credible suggestions of huge negative consequences of some action. We need to beware unintended runaway consequences of well-meaning actions.
The proactionary principle points out, on the other hand, that abstaining from action can have huge negative consequences as well. To adapt traditional language, there are “sins of omission” as well as “sins of commission”. Rather than any blanket abstention from actions which have associated risks, it is often better to develop plans to manage these risks.
For example, it may be argued that nuclear energy has the potential to give rise to radioactive waste that could contaminate huge biological ecosystems. The precautionary principle urges, in that case, shutting down all nuclear power plants. But the proactionary principle observes that wider adoption of nuclear energy might make all the difference in ramping down the use of carbon-based fuels quickly enough to avoid runaway global warming. In that case, it’s worth putting the precautionary stance temporarily on hold, while methods are reviewed for responding promptly to any leakages of radioactive waste. Far better to calmly assess the various probability estimates involved in these discussions, than to give absolute precedence to either the precautionary or proactionary principle.
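One simple way to “calmly assess the various probability estimates” is to compare expected harms across the available options. The probabilities and harm scores below are placeholders invented purely for illustration, not real risk assessments; the point is the structure of the comparison, not the numbers.

```python
def expected_harm(probability, harm):
    """Expected harm = probability of the bad outcome times its severity."""
    return probability * harm

# Placeholder figures, purely for illustration:
# Option A: abstain from nuclear power; the dominant risk is that
# carbon-based fuels are not ramped down in time.
harm_abstain = expected_harm(0.20, 100)

# Option B: expand nuclear power with managed waste handling; the
# dominant risk is a contained radioactive leak.
harm_expand = expected_harm(0.01, 50)

print("abstain:", harm_abstain, " expand:", harm_expand)
```

Under these (invented) numbers the proactionary option carries far less expected harm, but the conclusion flips if the probability or severity estimates change – which is exactly why the estimates themselves, rather than a blanket principle, should be the focus of deliberation.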
In many cases, a better principle than precaution is reversibility. Action that is risky should be undertaken in ways that allow reversal, in the event that matters develop badly.
A commitment to reversibility requires effective monitoring, and avoidance of any inertia that would overwhelm attempts to change course. It also requires the emotional intelligence that is willing to admit and experience failures, and to learn lessons from these failures. It’s an approach that requires a higher calibre of execution than would a reflex application of the precautionary principle.
This may seem like a tall order. Are society’s leaders really capable of operating with a higher calibre? This is by no means clear. In the interest of minimising risk, it may appear better to adopt policies that can be implemented more straightforwardly.
However, any strategy of attempting to always pick low-risk options is itself highly risky. Mediocre leaders will make mediocre choices. Major opportunities will be missed. Political systems that lack capability are more likely to be subverted by powerful vested interests.
Instead, the strategy that has the lowest risk overall is the transhumanist project to significantly improve the collective capability of society’s leaders. That project will be difficult, but can be achieved by drawing on the best that humanity has to offer.
Diversity and inequality
One source of greater overall capability is when society includes multiple diverse opinions and outlooks, from which new insights can be formulated and integrated.
This is in line with the core transhumanist principle that people’s differences should be valued. Transhumanists see no reason to enforce uniformity. On the contrary, transhumanists champion greater choice, not only over lifestyle and thinking approach, but also over bodily form (“morphological freedom”).
A matter of real concern, however, is if people are left behind against their will, in matters of opportunity, such as lacking access to resources needed for personal growth and development. That is no longer a diversity to be championed. That is an inequality to be addressed.
Another concern is when the rewards from some joint activity are systematically captured disproportionately by one of the parties in that activity, resulting in growing inequality of opportunity.
There are three reasons to work to reduce inequality of opportunity. First, each individual should recognise that their own circumstances may change, due to factors outside their own control, and they could cease to be a “winner” from current economic transactions and become instead one of the “losers”. Second, even those who are presently well off should fear social chaos arising from the disruptive activity of people who perceive themselves unfairly treated by society. Third, in line with our natural instincts that are in this case commendable, we humans are predisposed to protect one another from needless suffering.
In short, the transhumanist commitment to greater human flourishing implies as a consequence a commitment to enable people to overcome circumstances of unequal opportunity. A society that disregards this commitment is a society that will grow weaker and less capable.
Transhumanists oppose practices such as racism or ultra-nationalism that view members of specific ethnic groups as intrinsically inferior. This condemnation is compatible with the recognition that some genetic endowments increase skills or abilities in given areas, such as endurance in long-distance running. These variations do not cause any change in the intrinsic value of the people involved.
But the application of transhumanist technologies in the years and decades ahead will increase the diversity of human attributes – for example, enabling even greater endurance in long-distance running, or better memory and mental processing of information. Might this growing transhumanist diversity rupture the wholeness of humanity? What will happen to democratic ideals such as “one person, one vote” when some people have enhanced various of their attributes tenfold, one hundredfold, or more? Might these variations lead to a fragmentation of humanity into multiple castes? Is the potential for such a rupturing a reason to ban technologies of human enhancement?
As before, these changes should cause no change in the intrinsic value of the people involved. However, the increased diversity will give rise to a need for overall governance mechanisms that are more complex than before.
Groups of people who share particular enhanced skills and modes of practice will, understandably, seek some autonomy over decisions within their groups, freed from requirements for democratic approval by people in the wider community who have little understanding of, or interest in, these modes of practice. This is similar to the principle of technocratic decision-making: there are domains of specialist knowledge (for example, medicine) in which decisions are best taken by the relevant experts rather than by a vote that includes non-experts.
Nevertheless, domains often interact with each other. Where the activities of one group of people, with one set of enhancements, interact with the activities of other groups of people, a broader democratic agreement needs to be reached.
The design of the overall transhumanist society therefore needs to enable the prosperous coexistence of subgroups with significantly divergent skills and practices. This is an extension of present-day society, which already supports peaceful coexistence of subgroups with different interests and aspirations.
It is one thing to consider the coexistence of diversity within the overall transhumanist framework. What about coexistence with groups (or nations) that reject one or more of the proposed core transhumanist principles?
For example, some groups may reject the principle of superdemocracy, preferring a governance system with a dictator (presumed to be benevolent). Other groups may reject the idea of respecting non-human minds, and may treat primates, dolphins, and other intelligent species with greater brutality than transhumanists would expect. Yet other groups may put conformance to particular religious scriptures at the very centre of their decision-making processes. Groups may also reject the transhumanist project of working towards the transcendence of present-day limits on human nature – limits such as our tendency to become old and die, our tendency towards groupthink and the confirmation bias, our tendency towards the abuse of power, and so on.
The transhumanist answer is: tolerance within limits, coupled with ongoing respectful advocacy of the merits of the transhumanist worldview.
Where groups are carrying out practices that transhumanists judge as abhorrent – such as capital punishment for “apostates” who turn against the religious beliefs of their parents – these groups can expect sanctions and other restrictions on trade.
Transhumanists look forward to greater flourishing, not just of human minds, but of all human-like minds.
For example, in order to eliminate the need for the industrial-scale slaughter of farm animals and fish, the development of lab-grown meat should be accelerated.
It is likely that people will seek to “uplift” their pets, so their pets acquire greater health, longevity, intelligence, and wellbeing. The same principle applies as for humans, namely to avoid unbalanced development that actually leads to a reduction in flourishing.
To the extent that AIs acquire consciousness, they too deserve rights.
The consideration of non-human minds (such as uplifted animals and conscious AIs) further increases the levels of societal divergence and coexistence we need to anticipate. As such, the challenges of designing the overall society increase. However, we can also anticipate that the greater collective intelligence available will provide the capability to manage these challenges.
Re-engineering natural ecosystems
Nature “red in tooth and claw” involves horrendous amounts of suffering as animals hunt and devour each other.
Re-engineering natural ecosystems to avoid such suffering is one of the great causes to which we transhumanists can commit ourselves.
Talk of re-engineering natural ecosystems to avoid needless suffering – a project sometimes called “paradise engineering” – brings forth accusations of hubris. Transhumanists are reckless and naive, critics say. Transhumanists should “stop playing God”.
Transhumanists answer this last rebuke with the response “we’re not playing” – our intent is extremely serious.
It’s a commendable part of human nature to seek to do better than our human nature. Throughout history, this impulse has led to great advances in medicine, engineering, and the arts.
However, transhumanists acknowledge that there are risks of unintended consequences from the application of technology. Examples include drugs such as Thalidomide and Vioxx, and the environmental impacts of the pesticide DDT and, more recently, huge quantities of plastic. Other examples are the way in which widespread access to social media has fanned the spread of fake news and polarisation, and the unforeseen biases latent in some of the algorithms introduced into decision processes.
As noted earlier, there are also risks that the greater diversity of human and posthuman lifestyles will challenge the overall wellbeing of human society.
Transhumanists take the position that such risks should be identified, reviewed in advance, and managed wisely. No evidence has been presented that any such risks are incapable of solution. To the extent that risks seem particularly worrying – as with the proliferation of weapons of mass destruction – society can and should take stronger measures in response.
Taking back control
Are some aspects of technological development already beyond control?
For example, many observers are alarmed by the seemingly uncontrollable rise in greenhouse gases in the atmosphere, risking runaway global warming and greater instances of extreme weather. This would be an example of technological development – namely, the technology of extraction of fossil fuels – running contrary to sensible humanitarian control.
However, to anticipate the discussion from the next chapter, transhumanists point to the potential of next generation green energy to hasten the switch to non-carbon energy sources. The technology of CCS (Carbon Capture and Storage) can also be accelerated. In this way, the adverse effects of one generation of technology can be undone by the positive effects of a later generation.
To the response that dysfunctional economic and political systems are preventing sufficient social focus on accelerating the requisite technological transitions, transhumanists foresee transformations in the operation of economics and politics – transformations from systems operating dysfunctionally to a system of superdemocracy.
The effort required for these transformations to take place should not be underestimated. These transformations will be among the most difficult in the history of humanity. But these transformations can be guided by the greatest collective intelligence in the history of humanity, and empowered by the huge positive psychological energy awakened by the vision of sustainable superabundance.