Solution characteristics

Desirable characteristics of technological solutions

Six of the Singularity Principles promote characteristics that are highly desirable in technological solutions:

  • Reject opacity
  • Promote resilience
  • Promote verifiability
  • Promote auditability
  • Clarify risks to users
  • Clarify trade-offs

Reject opacity

The principle of “Reject opacity” means being wary of technological solutions whose methods of operation we don’t understand.

These solutions are called “opaque”, or “black box”, because we cannot see into their inner workings in a way that makes it clear how they are able to produce the results that they do.

This is in contrast to solutions that can be called transparent, where the inner workings can be inspected, and where we understand why these solutions are effective.

The principle also means that we should resist scaling up such a solution from an existing system, where any failures could be managed, into a new, larger system where any such failures could be ruinous.

As it happens, many useful medicinal compounds have mechanisms that are, or were, poorly understood. One example is the drug aspirin, probably the most widely used medicine in the world after its introduction by the Bayer corporation in 1897. The mechanism of action of aspirin was not understood until 1971.

Wikipedia has a category called “Drugs with unknown mechanisms of action” that, as of mid 2022, has 71 pages. This includes the page on “General anaesthetics”.

Many artificial intelligence systems trained by deep neural networks have a similar status. It is evident that such an AI often produces good results in environments that are well defined, but it’s not clear why it produces these results. It’s also unclear when such an AI system can be misled by so-called adversarial input, or what the limits are of the environments in which that AI will continue to function well.
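
To make the idea of adversarial input concrete, consider the following sketch. It uses a deliberately simple linear classifier with made-up numbers (real attacks target deep networks using the same gradient-based idea); everything here is illustrative rather than drawn from any particular system:

    import numpy as np

    # A toy linear "classifier": a positive score means class A, negative class B.
    rng = np.random.default_rng(0)
    w = rng.normal(size=100)          # the model's weights

    x = w / (w @ w)                   # an input scored at exactly +1.0: class A
    print("original score: ", w @ x)

    # Adversarial perturbation: nudge every feature slightly in the direction
    # that most reduces the score. For a linear model, the gradient of the
    # score with respect to x is simply w, so the worst-case nudge is -sign(w).
    epsilon = 0.05                    # small enough to pass for noise
    x_adv = x - epsilon * np.sign(w)

    print("perturbed score:", w @ x_adv)          # now strongly negative: class B
    print("largest change to any feature:", np.abs(x_adv - x).max())  # == epsilon

A change of at most 0.05 in each feature flips the classification decisively. Deep neural networks are vulnerable to the same manoeuvre, but, being opaque, they give little warning of where such weak spots lie.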

So long as the overall process is being monitored, and actions can be taken to address failures before they become ruinous or catastrophic, opaque systems might, for a while, be an “allowable weakness”, with the positive side-effect of increasing human wellbeing.

But if there are risks of any failure escalating beyond the ability of any intervention to fix in a timely manner, that’s when these opaque systems need to be rejected.

Instead, more work is needed to make these systems explainable – and to increase our certainty that the explanations provided accurately reflect what is actually happening inside the technology, rather than being unreliable fabrications.
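
One practical test of whether an explanation reflects what is actually happening is to fit an interpretable surrogate model to the black box’s own outputs, and then measure how often the two agree. Here is a minimal sketch of that idea, using the scikit-learn library on synthetic data; the choice of models and the data are illustrative assumptions, not prescriptions:

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for a real dataset.
    X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    # The "opaque" system: hundreds of trees, no human-readable rationale.
    black_box = RandomForestClassifier(n_estimators=300, random_state=0)
    black_box.fit(X_train, y_train)

    # Candidate "explanation": a shallow tree trained to mimic the black box,
    # i.e. trained on the black box's predictions, not on the true labels.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
    surrogate.fit(X_train, black_box.predict(X_train))

    # Fidelity: how often does the explanation agree with the system it
    # claims to explain, on data neither has seen before?
    fidelity = np.mean(surrogate.predict(X_test) == black_box.predict(X_test))
    print(f"surrogate fidelity: {fidelity:.1%}")

A surrogate that agrees with the black box only, say, 70% of the time is not an explanation of that system; it is a story about a different system.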

Promote resilience

The principle of “Promote resilience” means we should prioritise products and methods that make systems more robust against shocks and surprises.

If an error condition or an extreme situation arises, a resilient system will take actions to reverse, neutralise, or otherwise handle the error, rather than letting it tip the system into an unstable or destructive state.

An early example of a resilient design was the so-called centrifugal governor, or flyball governor, which James Watt added to steam engines. When the engine rotated too quickly, the flyballs swung outwards, operating a valve that reduced the steam supply and slowed the engine again.
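
The feedback logic of the governor can be expressed in a few lines of code. The following toy simulation (with made-up constants, not a physical model of any real engine) displays the defining property of resilience: after a shock, the system steers itself back towards its set point, rather than running away:

    SET_POINT = 100.0   # desired engine speed (arbitrary units)
    GAIN = 0.05         # how strongly the governor responds to overspeed

    def governor_valve(speed: float) -> float:
        """Flyball logic: the faster the engine spins, the further the valve closes."""
        valve = 0.5 - GAIN * (speed - SET_POINT)
        return max(0.0, min(1.0, valve))      # a real valve has hard limits

    def engine_tick(speed: float) -> float:
        """Steam admitted through the valve speeds the engine up; the load slows it."""
        return speed + 10.0 * governor_valve(speed) - 5.0

    speed = SET_POINT + 40.0                  # a sudden shock: 40% overspeed
    for tick in range(12):
        speed = engine_tick(speed)
        print(f"tick {tick:2d}: speed = {speed:6.2f}")
    # The speed falls back towards 100.0 and settles there, instead of
    # running away -- the signature of a resilient, self-correcting design.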

Another example is the failsafe mechanism in modern nuclear power generators, which forces a separation of nuclear material whenever the temperature becomes excessive, preventing the kind of meltdown that occasionally occurred with earlier designs.

Following the Covid pandemic and the consequent challenges to supply chains that had been over-optimised for “just-in-time” delivery, there has been a welcome rediscovery of the importance of designing for resilience rather than simply for efficiency. These design principles need to be further elevated. Any plans for new technology should be suspended until a review from a resilience point of view has taken place.

Promote verifiability

The principle of “Promote verifiability” states that we should prioritise products and methods where it is possible to ascertain in advance that the system will behave as specified, without containing bugs.

We should also prioritise products and methods where it is possible to ascertain in advance that there are no significant holes in the specification, such as a failure to consider interactions with elements of the environment, or interactions that arise only when components are combined.

In other words, we need increased confidence in each of two steps:

  1. The product is specified to behave in various ways, in order that particular agreed goals or requirements will be met in a wide variety of different circumstances (as previously discussed in the section on “Question desirability”)
  2. In turn, the product is designed and implemented, using various techniques and components, in order that it behaves in all cases in line with the specification.

The principle of “Promote verifiability” urges attention on ways to demonstrate the validity of both of these steps: that the specification meets the requirements, without having dangerous omissions or holes, and that the implementation meets the specification, without having dangerous defects or bugs.

These demonstrations must be more rigorous than someone saying, “well, it seems to work”. Different branches of engineering have their own sub-disciplines of verification. The associated methods deserve attention and improvement.
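
To give one illustration from software engineering: property-based testing checks stated properties of a specification against large numbers of automatically generated inputs, rather than a handful of hand-picked examples. A minimal sketch, using the Python hypothesis library (an illustrative choice of tool, applied to a hypothetical function):

    from hypothesis import given, strategies as st

    def apply_discount(price_pence: int, percent: int) -> int:
        """A hypothetical function under test: discount a price by a percentage."""
        return price_pence - (price_pence * percent) // 100

    # Properties taken from the specification, checked against thousands of
    # generated inputs rather than a few hand-picked examples.
    @given(st.integers(min_value=0, max_value=10**9),
           st.integers(min_value=0, max_value=100))
    def test_discount_stays_in_range(price, percent):
        # Never negative, and never a markup.
        assert 0 <= apply_discount(price, percent) <= price

    @given(st.integers(min_value=0, max_value=10**9))
    def test_zero_discount_changes_nothing(price):
        assert apply_discount(price, 0) == price

Techniques further along the same spectrum, such as model checking and formal proof, can establish properties for all inputs rather than merely very many.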

But note that this principle goes beyond saying “verify products before they are deployed”. It says that products should be designed and developed using methods that support thorough and reliable verification.

Promote auditability

The principle of “Promote auditability” has a similar goal to “Promote verifiability”. Whereas “Promote verifiability” operates at a theoretical level, before the product is introduced, “Promote auditability” operates at a continuous and practical level, once the product has been deployed.

The principle urges that it must be possible to monitor the performance of the product in real-time, in such a way that alarms are raised promptly in case of any deviation from expected behaviour.
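
At its simplest, such monitoring is a loop that compares observed behaviour against an expected envelope, and raises an alarm the moment the envelope is breached. A minimal sketch, with illustrative names and thresholds:

    from dataclasses import dataclass, field

    @dataclass
    class Monitor:
        """Raise an alarm as soon as an observed metric leaves its expected envelope."""
        low: float
        high: float
        alarms: list = field(default_factory=list)

        def observe(self, tick: int, value: float) -> None:
            if not (self.low <= value <= self.high):
                self.alarms.append((tick, value))
                print(f"ALARM at tick {tick}: {value} is outside [{self.low}, {self.high}]")

    # Audit an (illustrative) stream of error rates against a 0-2% envelope.
    monitor = Monitor(low=0.0, high=0.02)
    for tick, error_rate in enumerate([0.010, 0.012, 0.011, 0.047, 0.013]):
        monitor.observe(tick, error_rate)

    assert monitor.alarms == [(3, 0.047)]     # the deviation was caught promptly

Real deployments replace the print statement with paging and escalation systems, but the point is the same: the deviation at tick 3 must not pass unnoticed.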

Systems that cannot be monitored should be rejected.

Systems that can be monitored but where the organisation that owns the system fails to carry out audits, or fails to investigate alarms promptly and objectively, should be subject to legal sanction, in line with the principle “Insist on accountability”.

Systems that cannot be audited, or where auditing is substandard, inevitably raise concerns. However, if auditability features are designed into the system in advance, at both the technical and social levels, this will help ensure that the technology boosts human flourishing, rather than behaving in abhorrent ways.

Note, again, that this principle goes beyond saying “audit products as they are used”. It says that products should be designed and developed using methods that support thorough and reliable audits.

Clarify risks to users

The principles that have been covered so far are, to be frank, challenging ideals.

Compared to these ideals, any given real-world system is likely to fall short in a number of ways. That’s unfortunate, and steps should be taken as soon as possible to systematically reduce the shortfall. In the meantime, another principle comes into play: being open with users and potential users of a piece of technology about any known risks or issues with that technology. It’s the principle of “Clarify risks to users”.

Here, the word “user” includes developers of larger systems who might incorporate the original piece of technology in their own constructions.

The kinds of risks that should be clarified, before a user starts to operate with a piece of technology, include:

  • Any potential biases or other limitations in the data sets used to train these systems
  • Any latent weaknesses in the algorithms used (including any known potential for the system to reach unsound conclusions in particular circumstances)
  • Any potential security vulnerabilities, such as risks of the system being misled by adversarial data, or having its safety measures edited out or otherwise circumvented
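
Disclosures of this kind can be published in a structured, machine-readable form, in the spirit of the “model cards” and “datasheets for datasets” proposals from the AI research community. A minimal sketch, with purely illustrative fields and values:

    from dataclasses import dataclass, asdict
    import json

    @dataclass
    class RiskDisclosure:
        """A structured statement of known risks, shipped alongside the system."""
        data_set_limitations: list        # biases or gaps in the training data
        algorithmic_weaknesses: list      # known failure modes of the method
        security_vulnerabilities: list    # known attack surfaces

    disclosure = RiskDisclosure(
        data_set_limitations=[
            "Training data drawn almost entirely from one region and one decade",
        ],
        algorithmic_weaknesses=[
            "Confidence scores are poorly calibrated on out-of-distribution input",
        ],
        security_vulnerabilities=[
            "Classifier can be misled by small adversarial perturbations",
        ],
    )

    # Published in machine-readable form, so that downstream developers --
    # "users" in the sense above -- can inspect it programmatically.
    print(json.dumps(asdict(disclosure), indent=2))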

When this kind of information is shared, it lessens the chances of users of the technology being taken by surprise when it goes wrong in specific circumstances. It also allows these users to put necessary safeguards in place, or to consider alternative solutions instead.

Cases where suppliers of technology fail to clarify known risks are a serious matter, addressed by the principle “Insist on accountability”. But first, there’s one other type of clarification that needs to be made.

Clarify trade-offs

The principle of “Clarify trade-offs” recognises that designs typically involve compromises between different possible ideals. These ideals sometimes cannot all be achieved in a single piece of technology.

For example, different notions of fairness, or different notions of equality of opportunity, often pose contradictory requirements on an algorithm.

Rather than hiding such a design choice, suppliers should draw it to the attention of users of the technology. These users will then be able to make better decisions about how to configure or adapt that technology into their own systems.

Another way to say this is that technology should, where appropriate, provide mechanisms rather than policies. The choice of policy can, in that case, be taken at a higher level.
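
In code, “mechanisms rather than policies” typically means exposing the contested choice – here, the definition of fairness – as a parameter, rather than baking one choice invisibly into the implementation. A minimal sketch, with hypothetical names throughout; note that the same set of scores counts as fair under one notion and unfair under the other:

    from typing import Callable, Sequence

    def by_group(scores: Sequence[float], groups: Sequence[str], g: str) -> list:
        return [s for s, gr in zip(scores, groups) if gr == g]

    # Two contested notions of fairness; in general no single algorithm
    # can satisfy both at once.
    def approval_rate_gap(scores, groups) -> float:
        """Demographic parity: the gap in approval rates (score >= 0.5)."""
        rate = lambda g: sum(s >= 0.5 for s in by_group(scores, groups, g)) \
                         / len(by_group(scores, groups, g))
        return abs(rate("A") - rate("B"))

    def mean_score_gap(scores, groups) -> float:
        """A different notion: the gap in average scores between the groups."""
        mean = lambda g: sum(by_group(scores, groups, g)) \
                         / len(by_group(scores, groups, g))
        return abs(mean("A") - mean("B"))

    # The mechanism: expose the fairness criterion as a parameter, so the
    # policy choice is made visibly, at a higher level, by the user.
    def fairness_audit(scores, groups, policy: Callable) -> float:
        return policy(scores, groups)

    scores = [0.9, 0.4, 0.7, 0.2]
    groups = ["A", "A", "B", "B"]
    print("approval-rate gap:", fairness_audit(scores, groups, approval_rate_gap))      # 0.0: "fair"
    print("mean-score gap:   ", round(fairness_audit(scores, groups, mean_score_gap), 3))  # 0.2: "unfair"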

Next, let’s review the principles that ensure that development takes place responsibly.
