Consider the potential spread of technology that could be turned into horrific “weapons of mass destruction” (WMDs).
The dangers from WMDs arise from four factors operating together:
- The scientific possibility for various technologies to be weaponised – creating biological pathogens, nerve gases, nuclear bombs, electromagnetic pulses, and so on
- The desire of a group of alienated individuals to use such a weapon – out of a sense of deep grievance against society, humanity, or their own existence
- The ability of such people to access the materials required to inflict the damage they desire
- The plans of such people remaining “under the radar” – not being noticed by any intelligence services.
Unfortunately, it takes only a small number of people to be deeply disgruntled with life, and to seek “revenge”, for the risk of acquisition and use of WMDs to be alarming.
Moore’s Law of Mad Scientists
This issue was stated in a thought-provoking form by pioneering rationality advocate Eliezer Yudkowsky at the Stanford “Accelerating Change” conference in 2005:
Moore’s Law of Mad Scientists:
The minimum IQ required to destroy the world drops by one point every 18 months.
The reference, here, is to the original “Moore’s Law”, which can be stated loosely as “The hardware computing power of silicon chips doubles, for a given price point, every 18 months”. The “Mad Scientists” variant highlights the fact that advanced technologies are increasingly easy to find, configure, and deploy.
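The arithmetic behind both “laws” can be sketched as follows. Note that the starting point of 180 IQ points is an arbitrary illustrative assumption, not part of either formulation; only the rates of change come from the text above:

```python
# Illustrative arithmetic only: the starting IQ below is an assumption
# chosen for illustration, not a figure from either "law".

def moores_law_capability(years: float, doubling_period: float = 1.5) -> float:
    """Relative computing power after `years`, doubling every 18 months."""
    return 2 ** (years / doubling_period)

def mad_scientist_iq(years: float, starting_iq: float = 180.0,
                     drop_period: float = 1.5) -> float:
    """Hypothetical minimum IQ needed to 'destroy the world',
    dropping by one point every 18 months."""
    return starting_iq - years / drop_period

print(moores_law_capability(15))  # after 15 years: 1024x the computing power
print(mad_scientist_iq(15))       # after 15 years: threshold falls by 10 points
```

The contrast is the point: capability compounds exponentially, while the barrier to misuse declines steadily, so the pool of people able to cause catastrophic harm widens over time.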
As Oxford philosopher Nick Bostrom has pointed out, in his 2019 article “The Vulnerable World Hypothesis”, it was fortunate that nuclear weapons require considerable sophistication to produce:
Making an atomic weapon requires several kilograms of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, suppose it had turned out otherwise: that there had been some really easy way to unleash the energy of the atom – say, by sending an electric current through a metal object placed between two sheets of glass.
If atomic weapons had been easier to create, they would very likely have been used far more frequently. Fringe groups could have obtained these weapons and detonated them for their own convoluted purposes.
Thankfully, humanity dodged that particular bullet, so to speak. But as science and technology progress, new options for easy access to vastly destructive forces remain a matter of grave concern. New WMDs might not require any fundamental breakthrough; instead, the mere convergence of existing technologies, combined in an unexpected way, could be all that it takes to place terrible destructive power within the grasp of people with the adverse psychological disposition to wield it in anger.
Four projects to reduce the dangers of WMDs
In principle, reducing the likelihood of wide use of WMDs involves four projects:
- Making it clear to as many people as possible that a bright future awaits them, in a sustainable superabundance – reducing their motivation to initiate mass destruction
- Keeping track of the set of possible WMDs which recalcitrant members of society might be able to utilise – noting that this set will grow in size over time, as science and technology evolve
- Developing countermeasures to deploy against potential WMDs
- Monitoring for signs of any plans to use WMDs.
The last of these projects is, doubtless, controversial. It involves part of society surveilling citizens in ways that could be considered deeply intrusive. Such surveillance could enable state authorities to place undue constraints on members of the public. In other words, if not overseen carefully, legitimate surveillance could be accompanied by illegitimate surveillance.
The answer is to develop systems of “trustable monitoring”.
To be clear, the goal of a system of trustable monitoring goes further than detecting the actions of people who are intentionally planning some kind of mass destruction. The system would also have great value in detecting the actions of people who are risking mass destruction, without intentionally planning it. These are people who violate the safety framework contained in the Singularity Principles.
It’s similar to an individual deciding to drive a motor vehicle with a blood alcohol level above the legal limit. Such drivers may assert their individual freedom to drive whenever they wish. But society generally deplores such actions – following activism by groups such as Mothers Against Drunk Driving. Drunk driving risks the death, not only of the driver, but also of third parties involved in accidents caused by the driver’s inebriation. Drivers’ freedoms of action do not extend to a freedom to drive when in a dangerous condition.
Moreover, society generally approves of the role of law enforcement officers in noticing drivers who appear to be drunk, and in using a breathalyser test kit in such circumstances.
A critic might complain that the privacy of an individual driver is violated by the actions of police in stopping their car when it is being driven erratically, requiring them to breathe into a test kit, and then storing the evidence of the result of that test. After all, the driver might not want other people to find out that they were driving in a particular location at a particular time – especially if they were meant to be elsewhere at that time. Nevertheless, they have no absolute right to privacy when they are violating agreed safety rules.
In the same way, organisations that take maverick actions when creating new technology – actions that pose significant risks to the wellbeing of third parties – lose the right to complain about any records made regarding these actions.
Examples of trustable monitoring
The system of trustable monitoring I have in mind will be somewhat similar to the way in which society trusts medical doctors with sensitive medical information. Doctors found to have misused such information – or to have been careless with it – are subject to fines, loss of privileges, or other penalties. Likewise, in a corporation, members of the IT department have privileged access to some of the company’s key information stores, and are, again, required to make sparing use of such powers. Similarly, all commercial aircraft carry cockpit voice recorders, with the recordings being accessed only in case of accident. As one final example, lawyers often see information that is kept out of the public eye.
To avoid abuse, trustable monitoring requires an agreed separation of powers. Each of us might individually dislike the idea that our actions could be monitored, on the off-chance that we are preparing to acquire and deploy WMDs. However, that’s a freedom we may agree to give up, as part of a collective social agreement, in order to reduce the chance that WMDs are acquired and deployed anywhere within our society. The degree to which we are comfortable accepting such a trade-off will depend on the degree to which we can trust the part of society that is doing the monitoring. In turn, that depends on the overall quality of the political infrastructure within our society.
To be clear, there’s a significant difference between the kind of trustable monitoring I’m proposing, and the earlier examples of restricted access to private data by doctors, IT technicians, aircraft crash analysts, and lawyers. In these earlier examples, the surveillance has the approval of the people being surveilled. Patients might not want their medical records released to friends or work colleagues, but they acknowledge that their doctors should have sight of that information. However, a would-be mass bomber will take steps to prevent anyone from finding out about their plans. Accordingly, to be effective, the monitoring system will presumably need to keep secret some aspects of its methods of gathering information. Critics who have a deep distrust of public officials will be alarmed at any such measures of secrecy. They’ll wonder: what else is being concealed?
Watching the watchers
The above considerations show why a separation of powers is so important.
Any group of “watchers” – units of the state that keep a lookout for activities potentially involving WMDs – needs to be overseen by a group of “watchers of the watchers” – a separate unit that oversees the operation of the surveillance teams. Next, the watchers of the watchers will themselves be subject to democratic supervision. Moreover, the entire process is subject to criticism from journalists, analysis by independent researchers, and legal challenges in courtrooms.
As one example, agents of the UK’s MI5 security service are subject to review by:
- Internal MI5 supervision
- The Investigatory Powers Commissioner’s Office (IPCO)
- The Intelligence and Security Committee (ISC) of Parliament.
Public support for these surveillance systems depends on confidence that what is happening behind the scenes is in line with the external description of these systems. Saying one thing but doing something very different under the cloak of secrecy is a recipe for damaging public trust. Any such duplicity could cause “trustable monitoring” to switch, instead, to “contested, despised monitoring”. It’s all the more reason to insist on the highest standards of integrity in the conduct of such activities.
The people involved in surveillance systems, especially at senior management level, must be beyond suspicion: they should be recognised as carrying out their tasks, not for political or other ulterior purposes, but in line with principles that command bipartisan support.
That implies a higher level of trustworthiness than is often observed in politics. For a fuller discussion of the potential trustworthiness of the political process, see the next chapter.