Desirable characteristics of technological solutions
Six of the Singularity Principles promote characteristics that are highly desirable in technological solutions:
- Reject opacity
- Promote resilience
- Promote verifiability
- Promote auditability
- Clarify risks to users
- Clarify trade-offs
Reject opacity
The principle of “Reject opacity” means that we should be wary of technological solutions whose methods of operation we don’t understand.
These solutions are called “opaque”, or “black box”, because we cannot see into their inner workings in a way that makes it clear how they are able to produce the results that they do.
This is in contrast to solutions that can be called transparent, where the inner workings can be inspected, and where we understand why these solutions are effective.
The principle also means that we should resist scaling up such a solution from an existing system, where any failures could be managed, into a new, larger system where any such failures could be ruinous.
As it happens, many useful medicinal compounds have mechanisms that are, or were, poorly understood. One example is the drug aspirin, probably the most widely used medicine in the world after its introduction by the Bayer corporation in 1897. The mechanism of action of aspirin was not understood until 1971.
Wikipedia has a category called “Drugs with unknown mechanisms of action” that, as of mid-2022, has 71 pages. This includes the page on “General anaesthetics”.
Many artificial intelligence systems trained by deep neural networks have a similar status. It is evident that such an AI often produces good results in well-defined environments, but it’s not clear why it can produce these results. Nor is it clear when such an AI system can be misled by so-called adversarial input, or what the limits are of the environments in which it will continue to function well.
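To make the idea of adversarial input concrete, here is a purely illustrative sketch in Python. A simple linear classifier stands in for an opaque trained model; the model, the numbers, and the input are all invented for the example, but the effect mirrors what happens with deep neural networks: a perturbation that changes every feature by only a tiny amount is enough to flip the model’s output.

```python
import numpy as np

# A stand-in for an opaque trained model: a plain linear classifier.
# Real deep networks are far more complex, but the effect is the same.
rng = np.random.default_rng(0)
weights = rng.normal(size=1000)   # the model's "learned" parameters
x = rng.normal(size=1000)         # an ordinary input

def predict(inputs):
    """Return the model's class: 1 if its score is positive, else 0."""
    return int(inputs @ weights > 0)

# Craft a perturbation aimed at the decision boundary: every feature is
# nudged by the same small amount, just enough to flip the score's sign.
score = x @ weights
epsilon = 1.1 * abs(score) / np.abs(weights).sum()
x_adversarial = x - np.sign(score) * epsilon * np.sign(weights)

print("original prediction:   ", predict(x))
print("adversarial prediction:", predict(x_adversarial))
print("size of change to each feature:", round(epsilon, 4))
```

Real attacks on deep networks use the network’s own gradients to find such perturbations, but the underlying point is the same: the system’s behaviour can change sharply in ways its operators did not anticipate.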
So long as the overall process is being monitored, and actions can be taken to address failures before these failures become ruinous or catastrophic, opaque systems might, for a while, be an “allowable weakness” with the positive side-effect of increasing human wellbeing.
But if there are risks of a failure escalating beyond the ability of any intervention to fix it in a timely manner, that’s when these opaque systems need to be rejected.
Instead, more work is needed to make these systems explainable – and to increase our certainty that the explanations provided accurately reflect what is actually happening inside the technology, rather than being unreliable fabrications.
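One simple way to probe whether an explanation is faithful is sketched below, under deliberately simplified assumptions: the model, the explanation method, and the check are all hypothetical stand-ins. The idea is to remove the feature the explanation claims is most important, and confirm that the model’s output really does change more than it would if a randomly chosen feature were removed.

```python
import numpy as np

rng = np.random.default_rng(1)
weights = rng.normal(size=20)
x = rng.normal(size=20)

def model(inputs):
    """Stand-in for an opaque model: returns a single score."""
    return float(inputs @ weights)

def claimed_importance(inputs):
    """A hypothetical explanation method: rank features by |weight * value|."""
    return np.argsort(-np.abs(weights * inputs))

def faithfulness_check(inputs):
    """If removing the feature the explanation calls most important changes the
    output no more than removing a random feature, distrust the explanation."""
    baseline = model(inputs)
    top_feature = claimed_importance(inputs)[0]
    random_feature = rng.integers(len(inputs))

    ablated_top = inputs.copy()
    ablated_top[top_feature] = 0.0
    ablated_random = inputs.copy()
    ablated_random[random_feature] = 0.0

    effect_of_top = abs(model(ablated_top) - baseline)
    effect_of_random = abs(model(ablated_random) - baseline)
    return effect_of_top >= effect_of_random

print("explanation passes the faithfulness check:", faithfulness_check(x))
```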
Promote resilience
The principle of “Promote resilience” means we should prioritise products and methods that make systems more robust against shocks and surprises.
If an error condition or an extreme situation arises, a resilient system is one that will take actions to reverse, neutralise, or otherwise handle the problem, rather than letting it tip the system into an unstable or destructive state.
An early example of a resilient design was the so-called centrifugal governor, or flyball governor, which James Watt added to steam engines. When the engine rotated too quickly, the spinning flyballs rose and acted on a valve to reduce the steam supplied, slowing the engine back towards its intended speed.
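The feedback idea behind the governor can be captured in a toy simulation. The sketch below is not a physical model of a steam engine, and all the numbers are arbitrary; the point is simply that when a shock pushes the speed above its set point, the feedback action reduces the throttle and the speed settles back.

```python
# A minimal simulation of negative feedback, in the spirit of Watt's governor:
# when speed rises above the set point, the throttle is reduced, and vice versa.

set_point = 100.0   # desired engine speed (arbitrary units)
speed = 100.0
gain = 0.1          # how strongly the governor responds to the speed error

for step in range(41):
    if step == 10:
        speed += 30.0                                    # a sudden shock
    error = speed - set_point
    throttle = min(1.0, max(0.0, 0.5 - gain * error))    # the feedback action
    speed += 5.0 * (throttle - 0.5) - 0.1 * error        # crude engine dynamics
    if step % 5 == 0:
        print(f"step {step:2d}: speed = {speed:6.1f}, throttle = {throttle:.2f}")
```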
Another example is the failsafe mechanism in modern nuclear power generators, which forces a separation of nuclear material whenever temperatures become excessive, preventing the kind of meltdown that occasionally happened in reactors with earlier designs.
Following the Covid pandemic and the consequent challenges to supply lines that had been over-optimised for “just-in-time” delivery, there has been a welcome rediscovery of the importance of designing for resilience rather than simply for efficiency. These design principles need to be further elevated. Any plans for new technology should be suspended until a review from a resilience point of view has taken place.
Promote verifiability
The principle of “Promote verifiability” states that we should prioritise products and methods where it is possible to ascertain in advance that the system will behave as specified, without having bugs in it.
We should also prioritise products and methods where it is possible to ascertain in advance that there are no significant holes in the specification, such as failure to consider interactions with elements of the environment, or unforeseen interactions when components are combined.
In other words, we need increased confidence in each of two steps:
- The product is specified to behave in various ways, in order that particular agreed goals or requirements will be met in a wide variety of circumstances (as previously discussed in the section on “Question desirability”)
- In turn, the product is designed and implemented, using various techniques and components, in order that it behaves in all cases in line with the specification.
The principle of “Promote verifiability” urges attention on ways to demonstrate the validity of both of these steps: that the specification meets the requirements, without having dangerous omissions or holes, and that the implementation meets the specification, without having dangerous defects or bugs.
These demonstrations must be more rigorous than someone saying, “well, it seems to work”. Different branches of engineering have their own sub-disciplines of verification. The associated methods deserve attention and improvement.
But note that this principle goes beyond saying “verify products before they are deployed”. It says that products should be designed and developed using methods that support thorough and reliable verification.
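As a small illustration of what verification looks like in software, the sketch below checks an implementation against properties taken directly from its specification. The example function and the random-testing approach are deliberately trivial; industrial practice adds property-based testing tools, static analysis and, where feasible, formal proof.

```python
import random

def clamp(value, low, high):
    """Implementation under test: restrict value to the range [low, high]."""
    return max(low, min(high, value))

def check_specification(trials=10_000):
    """Check properties taken directly from the specification: the result
    always lies within [low, high], and any value already inside the range
    is returned unchanged."""
    for _ in range(trials):
        low = random.randint(-100, 100)
        high = random.randint(low, 200)
        value = random.randint(-300, 300)
        result = clamp(value, low, high)
        assert low <= result <= high, (value, low, high, result)
        if low <= value <= high:
            assert result == value, (value, low, high, result)
    return True

print("all specification checks passed:", check_specification())
```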
Promote auditability
The principle of “Promote auditability” has a similar goal to “Promote verifiability”. Whereas “Promote verifiability” operates at a theoretical level, before the product is introduced, “Promote auditability” operates at a continuous and practical level, once the product has been deployed.
The principle urges that it must be possible to monitor the performance of the product in real-time, in such a way that alarms are raised promptly in case of any deviation from expected behaviour.
Systems that cannot be monitored should be rejected.
Systems that can be monitored but where the organisation that owns the system fails to carry out audits, or fails to investigate alarms promptly and objectively, should be subject to legal sanction, in line with the principle “Insist on accountability”.
Systems that cannot be audited, or where auditing is substandard, inevitably raise concerns. However, if auditability features are designed into the system in advance, at both the technical and social levels, this will help ensure that the technology boosts human flourishing, rather than behaving in abhorrent ways.
Note, again, that this principle goes beyond saying “audit products as they are used”. It says that products should be designed and developed using methods that support thorough and reliable audits.
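To make the idea of runtime auditing concrete, here is a minimal monitoring sketch. The class, the thresholds, and the “approval rate” being tracked are all illustrative assumptions; a real audit trail would also log every decision and alert a human operator.

```python
import random
from collections import deque

class DeviationMonitor:
    """Minimal runtime monitor: track the rolling rate of some observable
    outcome (here, the fraction of requests a system approves) and raise an
    alarm when it drifts outside the range expected at design time."""

    def __init__(self, expected_rate, tolerance, window=1000):
        self.expected_rate = expected_rate
        self.tolerance = tolerance
        self.recent = deque(maxlen=window)

    def record(self, approved):
        """Record one decision; return True if an alarm is raised."""
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                      # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        if abs(rate - self.expected_rate) > self.tolerance:
            # In a real deployment this would page an operator and write an
            # auditable log entry; here it just prints.
            print(f"ALARM: observed approval rate {rate:.1%}, "
                  f"expected {self.expected_rate:.0%} ± {self.tolerance:.0%}")
            return True
        return False

# Usage sketch: the system was specified to approve about 30% of requests,
# but its behaviour has drifted to roughly 45%.
monitor = DeviationMonitor(expected_rate=0.30, tolerance=0.05)
for _ in range(2000):
    if monitor.record(random.random() < 0.45):
        break
```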
Clarify risks to users
The principles that have been covered so far are, to be frank, demanding.
Compared to these ideals, any given real-world system is likely to fall short in a number of ways. That’s unfortunate, and steps should be taken as soon as possible to systematically reduce the shortfall. In the meantime, another principle comes into play. It’s the principle of being open to users and potential users of a piece of technology about any known risks or issues with that technology. It’s the principle of “Clarify risks to users”.
Here, the word “user” includes developers of larger systems that might include the original piece of technology in their own constructions.
The kinds of risks that should be clarified, before a user starts to operate with a piece of technology, include the following (a simple sketch of such a disclosure appears after the list):
- Any potential biases or other limitations in the data sets used to train these systems
- Any latent weaknesses in the algorithms used (including any known potential for the system to reach unsound conclusions in particular circumstances)
- Any potential security vulnerabilities, such as risks of the system being misled by adversarial data, or having its safety measures edited out or otherwise circumvented
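As promised above, here is a simple sketch of how such a risk disclosure might be structured so that it travels with the technology. The field names and the example entries are hypothetical, not a standard; in practice, richer formats such as model cards and datasheets serve this purpose.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RiskDisclosure:
    """A minimal, machine-readable risk disclosure shipped alongside a piece
    of technology. Field names here are illustrative only."""
    component: str
    data_limitations: List[str] = field(default_factory=list)
    algorithmic_weaknesses: List[str] = field(default_factory=list)
    security_vulnerabilities: List[str] = field(default_factory=list)

disclosure = RiskDisclosure(
    component="loan-approval scoring model (hypothetical example)",
    data_limitations=[
        "training data under-represents applicants aged under 25",
    ],
    algorithmic_weaknesses=[
        "confidence scores are poorly calibrated for rare occupations",
    ],
    security_vulnerabilities=[
        "no defence yet against adversarially crafted input records",
    ],
)
print(disclosure)
```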
When this kind of information is shared, it lessens the chances of users of the technology being taken by surprise when it goes wrong in specific circumstances. It will also allow these users to provide necessary safeguards, or to consider alternative solutions instead.
Cases where suppliers of technology fail to clarify known risks are a serious matter, addressed by the principle “Insist on accountability”. But first, there’s one other type of clarification that needs to be made.
Clarify trade-offs
The principle of “Clarify trade-offs” recognises that designs typically involve compromises between different possible ideals. These ideals sometimes cannot all be achieved in a single piece of technology.
For example, different notions of fairness, or different notions of equality of opportunity, often pose contradictory requirements on an algorithm.
Rather than hiding the design choice that resolves such a conflict, suppliers should draw it to the attention of users of the technology. These users will then be able to make better decisions about how to configure or adapt that technology within their own systems.
Another way to say this is that technology should, where appropriate, provide mechanisms rather than policies. The choice of policy can, in that case, be taken at a higher level.
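A minimal sketch of the mechanism-versus-policy distinction follows, with invented names and numbers: the technology supplies a decision mechanism whose threshold is exposed as a parameter, and the deploying organisation chooses the policy, for instance whether to apply one threshold to everyone or to tune thresholds per group in pursuit of equal outcomes.

```python
from typing import Callable

# Mechanism: the technology exposes its decision threshold as a parameter,
# rather than hard-coding a single value. (All names and numbers are invented.)
def make_decider(threshold: float) -> Callable[[float], bool]:
    """Return a decision function that approves any score at or above threshold."""
    return lambda score: score >= threshold

# Policy: chosen by the deploying organisation, not baked into the technology.
# One deployer applies the same threshold to every group ("equal treatment");
# another tunes thresholds per group to equalise approval rates ("equal outcome").
equal_treatment = {"group_a": make_decider(0.60), "group_b": make_decider(0.60)}
equal_outcome = {"group_a": make_decider(0.55), "group_b": make_decider(0.65)}

applicant = {"group": "group_a", "score": 0.58}
print("equal-treatment policy approves:",
      equal_treatment[applicant["group"]](applicant["score"]))
print("equal-outcome policy approves:  ",
      equal_outcome[applicant["group"]](applicant["score"]))
```

The same mechanism supports both policies; the choice between them is a value judgement that belongs with the deploying organisation and its stakeholders, not with the supplier alone.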
Next, let’s review the principles that ensure that development takes place responsibly.