This page contains the current draft of the full text of Chapter 15 of RAFT 2035. All content is subject to change.
15. Politics and AI
Goal 15 of RAFT 2035 is that parliament will work in close partnership with a “House of AI” (or similar) revising chamber.
To restate the goal: politics will benefit from a close positive partnership with enhanced Artificial Intelligence.
Politics features both the strengths and the weaknesses of humans, writ large. When we sometimes complain that our politicians are stupid or selfish, we should remember that humans in general can be stupid and selfish – as well as, on occasion, being wise and selfless. The difference is in the degree of power that politicians can possess.
The dangers of power
Power tends to corrupt, warned Lord Acton, the nineteenth century historian and politician. Absolute power corrupts absolutely. Power can corrupt the clarity of a politician’s thinking, and their sense of moral duty. It can lead them to forget their social ties to their fellow citizens. It can cause them to imagine themselves as being particularly worthy and deserving. It’s little wonder that initially admirable politicians often go downhill over time.
If power tends to corrupt, what is truly worrying is that never before have we humans held so much power in our hands. Science and technology are providing us with spectacular capabilities. We face the threat of unprecedented large scale surveillance and manipulation by forces seeking undue influence. This manipulation can be subtle rather than blatant. That’s what gives it greater power. New technology also strengthens those who would wield fake news and other black-art psychological techniques to frighten or incite people into making choices that are different from their actual best interests. All this raises the spectre of politicians taking and holding power more vigorously than ever before.
Checks and balances, under threat
In principle, what limits politicians from abusing power is the set of checks and balances of a democratic society: separation of powers, a free press, independent courts, and regular elections. The effectiveness of elections depends, however, on electors being able to see matters sufficiently clearly, and to assess scenarios objectively.
Too often, alas, we electors prefer to mislead ourselves into believing comforting untruths and half-truths. We prefer reassuring slogans over an awareness that matters are actually much more complicated in reality. We use our intelligence, not to find out what is the true situation, but to find rationales that justify whatever we have already decided we want to do. We tend not to care much whether these rationales and slogans are sound. We care more that these slogans bolster our self-image, and raise our perceived importance inside the groups of people with whom we seek to identify. That’s because we are, thanks to human nature, motivated too often by fear and by vanity.
Clever social media communications seek to push us into emotional reactions rather than careful deliberation. With our hearts on fire, smoke gets in our eyes. With our emotions inflamed, online interactions frequently propel us to champion tribal instincts. With a heightened sense of the importance of group identity, we cheer on pro-group “blue lies” rather than respecting objective analysis. Afraid of having to admit we were previously mistaken, we double down on our convictions, in effect throwing good money after bad. We may succeed in ignoring reality – for a while, until reality bites back, with a vengeance.
Two sets of tools are available to prevent us continuing to misuse our individual human intelligences:
- Collective intelligence, where people help each other to reason more thoughtfully.
- Artificial intelligence, with automated reasoning and data analysis.
Both these sets of tools will be deeply important in the years ahead. The tools interact, raising possibilities for faster progress. At the same time, we need to be aware that these tools are, in their own way, capable of bad outcomes too:
- Collective intelligence can produce collective stupidity.
- AI can help people and corporations pursue dangerous goals more quickly than ever before.
We’ll need to keep our wits about us!
The rise of AI
AI systems are quickly increasing in use around the world as decision support tools. For example, they provide support for medical decisions, legal reviews, assessment of creditworthiness, identifying the most suitable candidates for employment, and suggesting new partners for romance and intimacy. Software tools can highlight mistakes in spelling and grammar, awkwardness in style, and wording that is more likely to be effective for particular audiences. Software tools can also flag up instances of mistaken facts, questionable sourcing, and the likelihood that some material, such as a video, has been fabricated or manipulated from its original content.
Before long, AI and other decision support tools should be able to provide very useful analysis and validation of political statements, including legislative changes that politicians are proposing. AI could identify potential problems with legislation sooner, and suggest creative new adaptations and syntheses of earlier ideas. AI can also alert us when we are becoming tired, bigoted, or selective in our use of evidence, and can recommend more fruitful ways to continue a discussion. This AI, therefore, could help us to become, not only cleverer, but also kinder and more considerate.
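To make the idea of automated review of legislative drafts more concrete, here is a deliberately simple toy sketch (not any real system, and far cruder than the AI analysis envisaged above): a rule-based checker that flags vague wording in a draft clause. The list of vague terms is an invented example.

```python
# Toy illustration only: a rule-based reviewer that flags vague wording
# in a draft legislative clause. A real "House of AI" would use far
# richer analysis; this merely shows the shape of an automated check.
import re

# Hypothetical watch-list of vague terms (chosen for illustration)
VAGUE_TERMS = {"appropriate", "reasonable", "as soon as possible", "robust"}

def flag_vague_wording(clause: str) -> list[str]:
    """Return the sorted list of vague terms found in a draft clause."""
    lowered = clause.lower()
    found = []
    for term in VAGUE_TERMS:
        # \b ensures whole-word (or whole-phrase) matches only
        if re.search(r"\b" + re.escape(term) + r"\b", lowered):
            found.append(term)
    return sorted(found)

draft = "The regulator shall take appropriate action as soon as possible."
print(flag_vague_wording(draft))  # -> ['appropriate', 'as soon as possible']
```

A genuine system would of course need to reason about meaning and context, not just surface wording; the point here is only that machine review of draft text can surface issues for human politicians to resolve.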
However, AI systems are prone, to varying degrees, to bias, misunderstanding, quirks, and other faults – some of which are very serious. People who use these tools sometimes put too much trust in them, without independently assessing their recommendations. Another risk, identified by writer Jamie Bartlett as “the moral singularity”, is that people will lose their ability to take independent hard decisions, through lack of practice, on account of delegating more and more decisions to AI systems. With atrophied moral intuitions, people will unintentionally become dominated by the moral decisions made on their behalf by machine intelligence. In short, there are many potential pitfalls ahead!
Accordingly, this RAFT goal seeks to secure the significant benefits of AI decision support, whilst managing the risks of mistakes from these systems, and the risks that arise if these systems are configured to serve the needs of only a small subset of human society.
There is no suggestion that AIs will have overall control of key decisions. Instead, what is envisioned is a productive partnership between human intelligence and machine intelligence, in which the final decision rests with human politicians.
The House of AI
The relationship between human politicians and the envisioned “House of AI” would follow the existing model of the relationship between the House of Commons and the House of Lords: the Lords can revise and amend legislation originating in the Commons, but the Commons has the ability to override recommendations from the Lords.
What’s more, just as the Commons can at present take account of ideas from groups of Lords in formulating new legislation, human politicians will in the future take ideas from the House of AI into consideration when drafting new political measures. As a further step, the House of AI could suggest a set of different political measures, along with an assessment of the pros and cons of each choice, for human politicians to take the final decision.
Compared to the present processes, the result with the House of AI involved should be better political legislation, understood more widely, and passed into law in a considerably more timely manner.
For the House of AI to succeed, a number of points should be followed:
- All algorithms used by the House of AI will need to be in the public domain, and to pass ongoing reviews about their transparency and reliability.
- Opaque algorithms, or other algorithms whose model of operation remains poorly understood, will need to be excluded, or evolved in ways addressing their shortcomings.
- There will likely need to be public funding allocated to develop these systems, rather than waiting for commercial companies to create them.
- Indeed, the House of AI cannot be dependent on any systems owned or operated by commercial entities. Instead, it will be “AI of the people, by the people, for the people”.
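One small sketch of how ongoing public review of these algorithms might work in practice (a hypothetical scheme, not part of the RAFT proposal itself): reviewers could confirm that the algorithm actually deployed in the House of AI matches the version published for public scrutiny, by comparing cryptographic digests of the two.

```python
# Toy illustration of one transparency check: verifying that the
# deployed algorithm is byte-for-byte identical to the publicly
# published version, via SHA-256 digests. The "register" here is
# an invented stand-in for a public audit process.
import hashlib

def digest(source_code: str) -> str:
    """SHA-256 digest of an algorithm's source text."""
    return hashlib.sha256(source_code.encode("utf-8")).hexdigest()

published = "def score(bill): return len(bill)"  # version in the public register
deployed = "def score(bill): return len(bill)"   # version actually in operation

assert digest(published) == digest(deployed), \
    "deployed algorithm differs from the published one"
print("digests match:", digest(deployed)[:12], "...")
```

Digest comparison only establishes that the code is unchanged; reviewing whether the algorithm is fair, reliable, and well-understood would remain a separate, and much harder, human task.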
To accelerate progress with Goal 15, two interim targets for 2025 are proposed:
- Reach an agreement on limits on the roles that can be played by commercially owned AI. This agreement should recognise the potential large contribution that could be made by commercially owned software, without being naive about the risks.
- Reach an agreement on the principles of “ethical AI”: what are the features which an AI could be built to include, but which will need to be excluded or curtailed, for the sake of true human flourishing?
These two agreements will both play a central role in the future evolution of RAFT. Different people and organisations have strongly divergent views about the scope and scale of such agreements. Obtaining consensus will require an honest and full consideration, not just of machine intelligence and machine goals, but also of human intelligence and human goals.
In other words, true progress is unlikely to be made with the question of “ethical AI” unless true progress is also made with the question of “ethical humanity”.
We can hardly expect to obtain the best results from the machine intelligence of advanced software systems unless we figure out how to obtain the best results from the different kind of “machine intelligence” that is displayed by the market system of profit-seeking corporations. Unless we know how to anticipate and remedy potentially huge market failures, what lies ahead will be huge AI failures – AI that serves, not the best aspects of humanity, but the worst aspects of humanity.
Although the 15 goals of RAFT 2035 are wide-ranging, they fail to include some important ideas about future possibilities. The next chapter covers some of the concepts in this “bubbling under” category.
For more information
- The 2019 book by Rana Foroohar, Don’t Be Evil: How Big Tech Betrayed Its Founding Principles — and All of Us
- The 2019 book by Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control
- The 2019 book by Roger McNamee, Zucked: Waking Up to the Facebook Catastrophe
- The 2019 book by Tom Chivers, The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World
- The 2018 book by Kevin Werbach, The Blockchain and the New Architecture of Trust
- The 2018 book by Shoshana Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power
- The 2018 book by Jamie Bartlett, The People Vs Tech: How the Internet Is Killing Democracy (and How We Save It)
- The 2018 book by Jamie Susskind, Future Politics: Living Together in a World Transformed by Tech
- The 2018 book by Thomas W. Malone, Superminds: The Surprising Power of People and Computers Thinking Together