10. Towards abundant creativity

This page contains the opening portion of Chapter 10 from
Sustainable Superabundance: A universal transhumanist invitation


10. Towards abundant creativity

As discussed in the previous chapter, greater machine intelligence and task automation have the potential, not only for triumph, but also for disaster. Alongside their potential to positively accelerate human flourishing in multiple spheres, these systems also have the potential to malfunction, catastrophically. These systems might give rise to what has been called “killer robots” – automated agents that unexpectedly kill vast numbers of people.

A key complication with killer robots is that it may be difficult ahead of time to appreciate the full extent of the dangers they pose. There could be an initial period in which automated systems demonstrate apparently smart decisions and stunning improvements in operational effectiveness. These systems could be involved, for example, in creating remarkable new medical cures or novel mechanisms to extract greenhouse gases from the atmosphere. During this period, human observers would come to feel confident about the technology involved – and about increasing the resources at the disposal of automated agents. But this could be a prelude to these systems veering badly off course, in an adverse reaction to some unforeseen circumstances. Adverse outcomes could include an all-consuming escalation of fake news, a meltdown in our global electronics infrastructure, the inadvertent destabilisation of the entire planetary climate system, or an accidental nuclear holocaust. The confidence that humans had developed in machine intelligence and pervasive automation would turn out to have been utterly misplaced.

This category of existential risk evidently needs far-sighted management, via, amongst other measures, the rapid development and wise enforcement of lean safety frameworks. But killer robots are by no means the only major concern raised by the growth of machine intelligence. We also need to give serious consideration to the possibility of “job-killing robots” – automation that performs workforce tasks much better than humans, and deprives humans of employment.

Both sets of threats need to be assessed and managed in parallel. To add to the considerations of the preceding chapter, the present chapter looks at the threat that greater machine intelligence and more pervasive automation pose to options for human employment.

The threat from job-killing robots may be viewed as less cataclysmic than the threat from killer robots. However, as this chapter highlights, the way society responds to huge numbers of people being deprived of employment could itself trigger a spiral into an increasingly tragic outcome. As such, there are no grounds for complacency. At the same time, there are grounds for real optimism too.

The opportunity for creativity

The threat to employment from automation has long been foretold. Up till now, these predictions seem to have been premature. However, the closer AI comes to AGI – the closer that artificial intelligence comes to possessing general capabilities in reasoning – the more credible these predictions become. The closer that AI comes to AGI, the bigger the ensuing social disruption.

How will humans cope, if their income from work is materially reduced, or perhaps disappears altogether? Should greater automation be feared, resisted, or slowed down?

To state the conclusion: rather than fearing this development, transhumanists look forward to the greater freedom that it can entail – greater opportunities for all-round human flourishing. Humans will no longer need to invest such large portions of their time in occupations that are back-breaking or soul-destroying. We’ll be able, instead, to participate in the creation and exploration of music, arts, sports, ecosystems, planets, and whole new universes. This will happen because the immense bounty from greater automation will contain plenty for everyone’s needs.

But before this abundance of creativity can be attained, some significant adjustments are needed in the human condition – changes in mindset, and changes in our collective social contract. These adjustments will be far from trivial. A great deal of inertia will need to be overcome, en route to realising the full benefits of improved automation.

<snip>


Recent Posts

A reliability index for politicians?

Reliability calculator

Imagine there’s a reliability index (R) for what a politician says.

An R value of 100 would mean that a politician has an excellent track record: there is no evidence of them having said anything false.

An R value of 0 would mean that nothing they said can be trusted.

Imagine that R values are updated regularly and published in real time by a transparent process that pulls together diverse sets of data from multiple spheres of discourse, using criteria agreed by people from all sides of politics.

Then, next time we hear a politician passing on some claim – some statistic about past spending, about economic performance, about homelessness, about their voting record, or about what they have previously said – we could use their current R value as a guide to whether to take the claim seriously.
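
To make the arithmetic concrete, here is a minimal sketch (in Python) of how such an R value might be computed from a politician’s fact-checked track record. The Claim structure, the weights and the verdicts are illustrative assumptions, not a worked-out methodology.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Claim:
    """One public claim, as judged by an agreed fact-checking process."""
    verdict_true: bool   # did the claim check out as accurate?
    weight: float = 1.0  # relative prominence of the claim (assumed input)

def reliability_index(claims: List[Claim]) -> float:
    """Return an R value between 0 and 100.

    R = 100: no evidence of anything false having been said.
    R = 0:   nothing that was said checked out as true.
    """
    total = sum(c.weight for c in claims)
    if total == 0:
        return 100.0  # an empty record gives no evidence of falsehoods
    true_weight = sum(c.weight for c in claims if c.verdict_true)
    return 100.0 * true_weight / total

# Example: eight accurate claims and two inaccurate ones, equally weighted
record = [Claim(True)] * 8 + [Claim(False)] * 2
print(reliability_index(record))  # 80.0
```

The hard part, of course, is the verdict input: who judges each claim, and by what criteria agreed across the political spectrum.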

Ideally, R values would also be calculated for political commentators.

My view is that truth matters. A world where lies win, and where politicians are expected to bend the truth on regular occasions, is a world in which we are all worse off. Much worse off.

Far better is a world where politicians no longer manufacture or pass on claims, just because these claims cause consternation to their opponents, sow confusion, and distract attention. Far better if any time a politician did such a thing, their R value would visibly drop. Far better if politicians cared much more than at present about always telling the truth.

Some comparisons

R values would play a role broadly similar to that already played by credit scores. If someone is known to be a bad credit risk, they face more barriers to receiving financial loans.

Another comparison is with the “page rank” idea at the heart of online search: pages that receive incoming links from other pages already believed to be important grow in importance in turn.
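
As a reminder of how that works, here is a toy power-iteration sketch of the PageRank idea; real search engines use far more elaborate machinery, so treat this purely as an illustration.

```python
def pagerank(links, damping=0.85, iterations=50):
    """Toy PageRank: pages linked from important pages become important.

    `links` maps each page to the list of pages it links to.
    """
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:  # a page with no outgoing links shares its rank evenly
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outgoing)
                for target in outgoing:
                    new_rank[target] += share
        rank = new_rank
    return rank

# A and B both link to C, so C ends up with the highest rank
print(pagerank({"A": ["C"], "B": ["C"], "C": ["A"]}))
```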

Consider also the Klout score, which is (sometimes) used as a measure of the influence of social media users or brands.

Some questions

Evidently, many questions arise. Would a reliability index be possible? Is the reliability of a politician’s statements a single quantity, or should it vary from subject to subject? How should the influence of older statements decline over time? How could the index avoid being gamed? How should satire be accommodated?
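
Some of those practical questions have plausible starting answers. For instance, the influence of older statements could fade via an exponential decay, and separate R values could be kept per subject. The sketch below illustrates both ideas; the one-year half-life and the subject labels are hypothetical assumptions, not proposals.

```python
from collections import defaultdict

def decayed_weight(age_days: float, half_life_days: float = 365.0) -> float:
    """Older claims count for less: their weight halves every half_life_days."""
    return 0.5 ** (age_days / half_life_days)

def subject_reliability(claims):
    """Compute a separate R value (0 to 100) per subject.

    `claims` is an iterable of (subject, verdict_true, age_days) tuples.
    """
    true_w = defaultdict(float)
    total_w = defaultdict(float)
    for subject, verdict_true, age_days in claims:
        w = decayed_weight(age_days)
        total_w[subject] += w
        if verdict_true:
            true_w[subject] += w
    return {s: 100.0 * true_w[s] / total_w[s] for s in total_w}

# A hypothetical record: one old falsehood on the economy is heavily discounted
claims = [
    ("economy", True, 30),
    ("economy", False, 800),
    ("health", True, 10),
]
print(subject_reliability(claims))
```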

Then there are questions not just over practicality but also over desirability. Would the reliability index result in a better politics, or a worse one? Would it impede honest conversation, or usher in new types of implicit censorship? Would the “cure” be worse than the “disease”?

Next steps

My view is that a good reliability index will be hard to achieve, but it’s by no means impossible. It will require clarity of thinking, an amalgamation of insights from multiple perspectives, and a great deal of focus and diligence. It will presumably need to evolve over time, from simpler beginnings into a more rounded calculation. That’s a project we should all be willing to get behind.

The reliability index will need to be created outside of any commercial framework. It deserves to be funded by public funds in a non-political way, akin to the operation of judges and juries. It will need to be resistant to howls of outrage from those politicians (and journalists) whose R values plummet on account of exposure of their untruths and distortions.

If done well, I believe the reliability index would soon have a positive impact upon political discourse. It would help ensure that discussions are objective and open-minded, rather than being dominated by loud, powerful voices. It’s part of what I see as the better politics that is possible in the not-so-distant future.

There’s a lot more to say about the topic, but for now, I’ll finish with just one more question. Has such a proposal been pursued before?
