By Dr Roland Schiefer, author of “All In The Mind”
Our near future will be dominated by the explosive development of Artificial Intelligence (AI) systems. These systems will affect our physical world by driving cars and controlling robots. However, online AI tools that are continuously accessible through mobile phones and wearable interfaces might have an even stronger effect on our social and political structure, because they will change the way we think and act. Groups that can quickly and effectively highlight the opportunities and threats of new AI tools will therefore have considerable impact on politics, development and user behaviour.
Example of a digital assistant: Microsoft Cortana
AI tools are “narrow” artificial intelligence systems that do complex things like interpreting voice input or analysing texts very fast. They do not have consciousness or a will of their own. The best-known examples of such tools are Siri, Google Now and Cortana, made by Apple, Google and Microsoft respectively. These voice-based “digital assistants” can handle simple conversations and daily administration tasks like scheduling and booking tickets. They can also answer questions by accessing search engines such as Google or Bing. Digital assistants are now mainly accessed from mobile phones, but Microsoft is introducing Cortana as an optional interface for Windows 10, extending their scope to laptops and desktop systems. The more convenient and reliable these assistants become, the more users will tend to see their own assistant as the obvious way to access the digital world. Successful producers of digital assistants will try to reinforce this trend by creating a “walled garden”, a service environment so richly stocked with easily accessible options that most users will not bother to look outside. We can expect Siri travel services and a Cortana tax return service. However, providers of smart services such as travel agencies and accounting firms will try to strengthen their own brand identity. Perhaps we will get an HSBC tax return service that can be accessed through any of the popular assistants – degrading the assistant to the role of a smart browser. Some companies might fund the development of open assistants that leave all the branding to the actual service provider. AI tools are ready for the mass market.
The development of AI tools proceeds at an extreme speed. Some of the richest companies in the world, like Google, Apple and Microsoft, are putting much of their profits into AI systems. They hire tens of thousands of the brightest, best educated and most motivated people to shorten product cycles. Billions of users are eagerly waiting for new versions of AI tools such as Google Now, Siri and Cortana to make their own lives easier, more exciting or more profitable. They produce a flood of feedback concerning new uses and suggested improvements. They are also willing to have their purchasing decisions influenced in return for this service, so they generate huge profits that are in turn invested to improve AI tools. This process creates a continuous proliferation of new, potential applications. Only a few of these applications have yet been built and only a fraction has even been considered. Most of them might remain unexplored, because the initial applications we use tend to influence what we want next. Minor decisions in the early stages will therefore lead us along a path of development on which some highly desirable options are no longer feasible or negative outcomes can no longer be avoided.
We cannot plan this development in a conventional way, as we know little about the opportunities and threats we will have to cope with. We can only try to reach some consensus about the general outcomes we would prefer and then try to nudge the development process in a direction that makes these outcomes more likely. This has some similarities with the way a catalyst works in a chemical reaction: it makes certain reaction steps more likely. AI development might be “catalysed” by adjusting the legal framework, changing the rules for access to capital, providing reliable, evidence-based information for decision-makers in the sector or showcasing demonstrator projects to accelerate specific developments. The Government will certainly play an important role, as it represents public interest, has access to funding and can change the legal structure. We can also expect that some universities and foundations will establish “Catalyst Teams” that aim to make preferred outcomes more likely. The impact of such teams will increase with the following capabilities:
- Involve representatives of various stakeholders such as politicians, civil servants, platform developers, businesses, pioneering users and activists. Facilitate a fast flow of information on new developments, opinion polls among members, responses to project concepts, etc. Create relevant consensus faster than normal communication would do.
- Focus on “verified information” that can be accepted by decision makers without delay, i.e. not personal hopes and fears. Base statements on tests where possible. Ensure access to a pool of test users who will work hard to learn certain AI tools and produce measurable outcomes. University students might do this as part of a study module while university staff could manage the tests.
- Use mass media to familiarise a wide range of people with new concepts. Present desirable AI trends in the form of entertaining events. Motivate participants with publicity or prize money. The format could extend from a technical or business focus to a brainy version of the “Survivor” series to attract a wider audience.
- Cooperate with developers to fast-track development of demonstrators. Showcasing desirable applications might require customised prototypes at the edge of current technology. Companies such as IBM or Microsoft, smaller software developers or innovative university departments might be motivated by an opportunity to show their products to a wide audience.
- Continuously model the impact of possible legal changes. The law should be used as a catalyst to make things go right and not a remedy to be applied after things have gone wrong.
The following sections describe a few potential applications of AI tools and how Catalyst Teams might accelerate and inspire their development.
A new type of education
In future, AI tools will consistently support us in what we are doing. They will provide us with facts, search for solutions, guide our hands-on work and free us from a lot of unpleasant administration tasks. Some parts of our current education will therefore become useless and we will require skills that are currently not taught in school. We might find that the best performers among AI tool users are not necessarily the best pupils in a conventional school environment.
Teaching by AI tools in particular will open up new opportunities. A pupil’s personal AI tool, which could be provided online as a service, could store all knowledge a pupil has already learned. Tests and questions of understanding would then reveal what knowledge she has retained and what type of information she is most likely to forget. Based on the learning progress of other pupils with similar mental characteristics, the AI tool could determine the pieces of information that are most likely to complete a process of understanding or learning a practical skill. Each learning step will make the model of the pupil’s mind more detailed. Obviously, an AI tool would present all information in the best possible way, emphasising graphical format, text, sound, instruction by a teacher or group work according to the pupil’s personal characteristics, the task at hand and the fellow pupils available for interaction.
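The core of such a personal teaching tool can be illustrated with a minimal sketch: track each pupil’s recall of individual learning items and always present the item she is most likely to forget. The class name, the update rule and the example items are purely illustrative assumptions, not a description of any existing product.

```python
class LearnerModel:
    """Toy model of a pupil's retention of individual learning items."""

    def __init__(self):
        # item -> estimated probability that the pupil can recall it
        self.retention = {}

    def record_test(self, item, recalled):
        # Update the estimate with a simple exponential moving average
        # of test outcomes; unseen items start at a neutral 0.5.
        prev = self.retention.get(item, 0.5)
        self.retention[item] = 0.7 * prev + 0.3 * (1.0 if recalled else 0.0)

    def next_item(self, candidates):
        # Present the candidate with the lowest estimated retention,
        # i.e. the piece of knowledge most at risk of being forgotten.
        return min(candidates, key=lambda it: self.retention.get(it, 0.5))


model = LearnerModel()
model.record_test("fractions", recalled=True)
model.record_test("decimals", recalled=False)
print(model.next_item(["fractions", "decimals"]))  # decimals
```

A real system would replace the moving average with a model trained on the progress of many pupils with similar characteristics, as described above, but the selection principle stays the same.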
This style of learning does not have to be done in isolation. A group of children and their teacher might share a virtual environment in which the learning takes place. The AI tools of the individual pupils could communicate to develop the best teaching strategy based on the composition of the group and the abilities of the teacher, who might primarily be responsible for the interpersonal part of the event.
It will certainly be advantageous to promote this new style of learning. Countries that manage it first will have a competitive edge. Universities that can integrate it will attract more foreign students and teachers who excel in it can expect brilliant job opportunities all over the world. Learning with AI tools might particularly benefit currently disadvantaged pupils, as it can analyse and remedy their weak spots in a focused manner.
Parents and teachers will rightly be hesitant. Many new teaching methods and teaching technologies promised to revolutionise education and did not deliver. The AI teaching tools described above are feasible, but they are not ready for routine use. Developers might hesitate to invest heavily in products that might not be taken up. Catalyst Teams could therefore accelerate this process by creating bridges between the different stakeholders.
Imagine a school agrees to host a pilot AI teaching lab and to have selected classes use it regularly for a few hours a week; an online publisher or TV channel agrees to document the progress of the children and provide regular broadcasts; a major developer such as IBM agrees to provide and adapt appropriate AI teaching tools. This would not only create feasible solutions. It would establish a dynamic process in which all participants have an interest in moving faster than they otherwise would.
Imagine another TV show in which groups of high-school children compete in solving tasks using AI tools through mobile phones or augmented reality gear. Pupils from schools in different socio-economic environments are invited to increase variety. A big high-resolution screen in the background shows the viewer what the contestants see on their mobile screens or their virtual reality gear. Viewers can thus keep an eye on the entertaining social interaction while they learn how school problems and real-life problems can be solved by using AI tools.
Some of the challenges could be school-related. One can expect that many people would like to know how easy it is to ace conventional A-level exams when connected to AI tools. The groups might also be asked to design a product that is later printed on a 3D printer and has to be put to practical use. Pupils could show how well they can diagnose the illnesses of people who volunteer for this purpose. Contestants might be asked to start and run a company with the help of their AI systems. All this would provide the adults with a gentle introduction to how their own world is changing right now.
AI tools reduce the minimum number of people required to advance a civilisation from our current level. These tools can store all of our knowledge and skills and teach them to users as required. Generalists with AI tools could then do specialist work. A general mechanic with augmented reality gear, for example, could repair almost any device including motor cars, diesel trucks, domestic appliances, airplanes and mobile phone masts. A self-sufficient group would therefore not need many mechanics. The same applies to surgeons, researchers and any other kind of specialist. Maintaining a specific level of technology therefore does not require a minimum number of experts who can hold all the relevant knowledge of our time in their heads. Scientific progress will obviously be related to the number of well-educated and well-equipped people who can contribute to it. Smaller societies will therefore develop more slowly and those eager to exchange information will grow at a faster clip.
AI tools could also reduce the volume of trade needed to ensure survival. Much of our global interdependency results from our need for industrial products that are made cheaper elsewhere, raw materials only found in foreign countries or food that does not grow in our region. 3D printers could make most of the products we need, although usually at a higher price and a lower quality. High levels of recycling or alternative product designs could reduce our dependence on raw materials. It will then still be advantageous to trade, but as a matter of economic efficiency, not as a matter of survival. In the event of a disaster, countries, regions, cities and even small bands of individuals could continue on their own.
Imagine an ongoing experiment designed to test how well a small group of dedicated participants might do in sustaining civilisation. Approximately 20 students might live on a piece of land dedicated to that purpose. Their experiences might be shown in regular TV episodes. The average student might stay nine months, so that participation can be arranged during a gap or practical year. Students might get credits for participating. Universities would benefit from the exposure. Companies making AI tools, 3D printers and other equipment might sponsor the event.
The event should involve practical challenges so that TV viewers have fun watching it. Contestants might start off in tents and then build houses guided by their AI systems. They might generate their food in a microbiological process using waste and solar energy. That will make for interesting cooking experiences. They might have to grow crops. Should they be provided with a tractor or should they have to work with shovels until they can 3D-print their own tractor?
Participants might be required to complete a study module during that year to show that new knowledge was transferred. They might be required to do some research work to show that overall knowledge has been increased. Step by step, the conditions could be made more stringent. Can they print their own mobile phones? Can they print server computers that run their AI software? Can they develop new AI software? Can they print printers? Can they do genetic engineering? Could they really go it alone?
Such a project would not only provide entertainment and a lot of commercial spin-offs. It would change the way we see ourselves and our civilisation. AI tools might turn us into a network of interacting subunits that can easily rearrange themselves in response to technological, environmental or political changes. That would make our society far more resilient against shocks or disasters of any kind and have obvious relevance for national defence.
How to make AI tools more trustworthy
We can expect that authoritarian regimes all over the world will use the opportunities provided by AI tools. They will create their own digital assistants with an ecosystem of AI tools around them to provide their citizens with convenient cradle-to-grave support in a regime-friendly way.
Even in currently democratic societies, there is no “objective” way of building analytic systems, just as there is no “right” way of writing a newspaper article. Any analysis of data or text by man or machine will necessarily require a wide range of assumptions. Systems that provide technical answers will rely on current scientific theories, some of which will turn out to be wrong. Economic analysis will necessarily be based on theories about the way markets work or the way vested interest groups play their games. Most AI tools will have an underlying ideological bias. The best way to cope with that is to make this bias visible and give users a choice.
Digital assistants that support choice must be “honest brokers” that allow users to choose what sources are used. Some users might only want to use data sources that have already been used in peer reviewed journals. Some might only want to use legal analytics engines built by major accounting companies. Some might want all their sources to be approved or previously used by one of the NATO governments. Others might prefer the Chinese government or a large charitable organisation.
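The “honest broker” idea above amounts to a user-controlled trust policy: the assistant may only draw on sources that have been vetted by a body the user trusts. The sketch below makes this concrete; the source names, the approving bodies and the policy format are all hypothetical illustrations.

```python
from dataclasses import dataclass


@dataclass
class Source:
    name: str
    approvals: set  # bodies that have approved or previously used this source


def allowed_sources(sources, user_policy):
    """Keep only sources vetted by at least one body the user trusts.

    user_policy is the set of approving bodies the user accepts;
    a source qualifies if its approvals intersect that set.
    """
    return [s for s in sources if s.approvals & user_policy]


catalogue = [
    Source("TaxEngineA", {"major accounting company"}),
    Source("StatsFeedB", {"peer-reviewed journal", "NATO government"}),
    Source("NewsScraperC", set()),  # no vetting at all
]

# A user who only trusts peer-reviewed or government-vetted sources:
policy = {"peer-reviewed journal", "NATO government"}
print([s.name for s in allowed_sources(catalogue, policy)])  # ['StatsFeedB']
```

Making such a filter visible and user-editable is what would distinguish an honest broker from a walled garden, where the equivalent choice is made by the platform and hidden from the user.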
Catalyst Teams could help during the initial stages of this development by arranging events in which digital assistants are compared with each other. Are they all “honest brokers”? Can the user track the information and models on which the assistant’s answers were based? How much is the scope of services restricted by routing users to preferred business partners? How much effort is made to keep data confidential?
This is likely to cause conflict. The revenue stream for platform developers, the makers of digital assistants, might be linked to keeping users within a “walled garden”, i.e. sourcing services only from a group of preferred suppliers who support each other. Many users will object to having their choices made for them. Developers, in turn, might argue that the first of them who really breaks down the garden walls will suffer the highest financial loss.
Catalyst Teams might encourage fair competition and free choice by promoting legislation that forces all platform developers to reveal all sourcing choices of their digital assistants and to make it easy for users to set their personal preferences. The same analysis performed with different clusters of AI tools will often lead to different results, and that will in turn trigger studies about the nature of these differences. When tools do not behave as intended, developers will change their structure or look for better concepts to base them on. Political, philosophical or social concepts, for example, which have previously only been compared in scholarly discussions, will then have to prove their value as functional components in tools. This should speed up the development of concepts and put our civilisation on steroids. Countries that allow it to happen will grow in strength. Authoritarian regimes, obsessed with protecting current privileges, are likely to lose out. If we can indeed arrange our systems in a way that makes open structures more competitive, we will have more reason to trust these tools.
In the middle of the nineteenth century, wealth was very unevenly distributed and our economic system looked rather unstable. Factory owners could afford to pay the seemingly unlimited masses of workers just enough to survive and reproduce. Karl Marx predicted that a competitive market would force companies to produce ever more goods at increasingly lower prices – goods that the workers could not afford and the rich classes were no longer able to take up. Then the system would collapse leading to a revolution.
Things did not happen that way. The masses of workers were not unlimited. Supply dried up and they had more opportunity to bargain for a larger share of the cake. An increasing number of workers got a higher education and could therefore work more productively. They were valuable and short in supply, so they could demand even higher wages. Step by step, educated workers on different levels acquired a larger share of the wealth. They became the consumers that the system had been lacking.
This new class of educated workers gained in influence and shaped the political system. Sometimes they allied with the poor and deprived to wrestle more privileges from the rich. Sometimes they allied with the rich to preserve the current order, as they now had houses and pensions to lose. And they shaped expectations. Good education and hard work were bound to ensure lifetime employment and a rather generous income.
Once again, things did not happen as expected, this time due to automation. Smart machines can out-produce and out-administrate humans in an increasing number of fields. This has reduced the demand for people with average intelligence, education and motivation, who have generally not seen an increase in their purchasing power over the last three decades. Only top performers are disproportionately rewarded. Machines are also starting to replace people in jobs that are still seen as pinnacles of personal achievement, such as medical doctors and pilots. The political bargaining power of educated citizens is therefore waning and the influence of those who own smart machines is increasing. This reduces the stability of our current political system.
Catalyst Teams should attempt to counter this trend by building on the spread of AI tools, as these tools will generally make people more productive. Productive employees can command higher wages, and subsidising AI-based professional skills enhancement would accelerate this process. Budding entrepreneurs will find it easier to start their own companies, because AI systems will take most administrative chores off their hands. Legislation that interlinks with the capabilities of AI systems could make it even easier to found a company, gain access to financing, or handle administration. Higher income and the optimism associated with an economic upswing are likely to lead to higher investment and higher consumption. We can expect fast economic growth that benefits a large part of the population and therefore reduces the current income inequality.
Considerable resistance will have to be overcome, because improvements for a large number of people will often be linked to disadvantages for some. Running a company, for example, will be much easier when most tax and accounting issues are handled by an AI tool. This will invariably mean fewer jobs for tax consultants and accountants. Well-educated and well-organised professionals will try to stall developments that disadvantage them. Legislators will have to counter vested interest groups in the interest of the community.
AI tools will allow individuals to create considerable value outside the formal economy. Conventional economic indicators based on the formal economy might therefore show that things are going down while everybody creates more value and actual consumption is going up. Imagine a house owner who gets new windows installed. He gives a job to somebody else and increases GDP, but only a few house owners will do that, because professional services are expensive. Imagine another house owner who installs his own windows by using an AI system. All work steps are shown on an augmented reality interface, allowing him to do work he would otherwise not have managed. He is a pensioner who is glad to do work that gives him purpose and pride, so no opportunity costs need to be budgeted. Assume his story inspires many other do-it-yourself customers who would never have considered contracting a company. Their joint purchases of material might lead to a moderate GDP increase while the actual increase in the value generated is much higher. AI systems might in this way activate considerable amounts of “hidden labour” among the increasing number of pensioners and people on welfare or basic income. Catalyst Teams might help this development by popularising appropriate economic indicators that show how well we are actually doing.
Transforming the legal jungle
Repealing laws takes effort for politicians. Laws that make life a bit more difficult for everybody without any real benefits are therefore likely to stay around, because lawmakers would not get political rewards for removing them. Laws that create one big winner, who naturally supports them, and many small losers, who are often not even aware that they are losing out, are likely to get passed and stay around. Laws that make the proponent look good, like safety regulations, are likely to pass and stay around, even when they are actually just favours for vested interest groups. Who wants to stand up and be against safety? These are just some of the reasons why the legal jungle grows. Some economists worry that our society might suffocate itself under a blanket of overregulation.
Specialist lawyers can navigate through parts of the jungle to help their clients. Naturally, they want to keep the jungle as dense as possible to make the help of lawyers indispensable. As far as their profession is concerned, they are fighting a losing battle. Almost half of the legal costs have previously been incurred by “discovery”, a process in which junior lawyers sift through millions of pages of documentation to find relevant passages. This process has largely been taken over by IT tools, reducing the demand for lawyers and the fees that clients are willing to pay. Registrations for law degrees at US universities have dropped by 40% and AI developers are trying to increase the automation of legal services. Liverpool University, for example, has teamed up with Riverview, a law firm, to see how far AI development skills can be used in a commercial law firm. There might be some parts of the legal profession that are fiendishly hard to automate, but there will be some low-hanging fruits that can be picked first.
Activists with legal skills could, for example, search for laws that appear to be there for the common good, but only protect the privileges of a few while putting a burden on a large number of others. They could provide support for their point of view by using a legal analytics tool and demand that this law be repealed. Opponents would only need a single legally trained person and an AI tool to respond, so insisting on a response in due time would not be unreasonable. If the law cannot be defended, lawmakers will be under pressure to repeal it.
It might be sensible to focus first on repealing laws that do not need replacement and then extend the scope to laws that are easy to replace. Once the principle is established, we can expect a variety of activists that demand to repeal laws that are useless, do not serve their purpose, favour vested interest groups and so on. Step-by-step the legal jungle might be turned into a park with broad walkways and sign-posted paths.
Catalyst Teams could accelerate this process by showing how much is already possible and highlighting where this development might go. They could also establish links between innovative politicians, university law departments and legal AI tool developers to pick a few promising legal areas to start the clearing process.
Open AI tool development
AI tools will soon be key elements of our societies. If only a few software producers in the world were able to make such AI tools, economic growth would be hampered, these companies could extract considerable rent from commercial and private users, and small societies could never become resilient, as the main tools required to make them productive would come from an outside source. It is therefore essential to ensure that the base of AI tool creators is as broad as possible.
Much of the AI tool creation relevant for economic growth may be based on extending existing tools. Somebody may create an AI tool for repairing domestic appliances with augmented reality guidance. Somebody else might extend that tool to repairing vintage cars. A third party might extend it to handle minor surgery. However, all of them would still depend on the original product. Genuine variety and resilience is only possible when a sufficient number of players can create new AI tools from a very low level. This can be imagined as follows:
A developer provides his “extractor” AI tool with information on what a new AI tool should be able to do. That can be done by speaking, writing, drawing or guidance of internet searches. The extractor builds a logical structure and continuously checks whether this structure is consistent and complete. As long as it is not complete, the extractor will ask for more, taking previous input, background information and information about the developer into account. The result is a complete specification.
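The extractor’s core behaviour is an iterative loop: maintain a partial specification, check it for completeness, and keep querying the developer until nothing is missing. The toy sketch below shows only this loop; the required fields, the question format and the example answers are invented for illustration, and a real extractor would of course also check logical consistency, not just presence.

```python
def run_extractor(ask_developer, required_fields):
    """Build a specification by repeatedly asking for missing parts.

    ask_developer is any callable that turns a question (a string)
    into the developer's answer; required_fields lists the parts a
    complete specification must contain.
    """
    spec = {}
    while True:
        missing = [f for f in required_fields if f not in spec]
        if not missing:
            return spec  # specification is complete
        # Ask about the first missing part and record the answer.
        field = missing[0]
        spec[field] = ask_developer(f"Please describe: {field}")


# Simulate a developer answering three questions in order:
answers = iter([
    "repair domestic appliances",
    "voice input with AR overlay",
    "appliance manuals",
])
spec = run_extractor(lambda question: next(answers),
                     ["purpose", "user interface", "data sources"])
print(sorted(spec))  # ['data sources', 'purpose', 'user interface']
```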
The developer should then be able to feed this specification into an automated compiler, an AI tool that turns the specification and information on the nature of the hardware on which it is to be used into a working product. Only when those compilers are good enough to become the choice of established software developing companies will this bottleneck have been overcome.
Ideally, any graduate with specialist knowledge and a viable vision should be able to use the knowledge extraction process to create a new AI tool. The process should not require thousands of highly specialised AI-tool-related terms that are not generally understood. However, AI tool creation will involve some concepts that are not yet part of the way we think. And it might show us that some of our current ways of thinking are quite inefficient. General education will have to be adapted accordingly.
Catalyst Teams might help to pinpoint the risks resulting from dependence on a small number of AI tool developers and encourage solutions. It is obvious that currently successful AI developers will not appreciate the idea of automating the software development process, as this would change the environment in which they prosper. One can rather expect them to provide a flood of arguments that makes such an attempt look inappropriate, premature or misguided. Catalyst Teams might therefore encourage other players, e.g. universities or open source groups, to tackle this problem.
Spy versus spy
A mesh of civilian and military cyber-security is becoming increasingly important for our societies. This was illustrated in 2010 by the Stuxnet computer worm, which was allegedly produced by US and Israeli agencies. The worm spread via USB flash drives and communication networks. It sought out the Siemens programmable logic controllers in the centrifuges of the Iranian nuclear programme and made them malfunction. Quite a few centrifuges ended up broken. Since then, it has been common knowledge that cyber war is real.
The use of armed aerial drones in many crisis areas of the world makes it evident that war is increasingly being automated. A look at the smart creations of Boston Dynamics, a company now owned by Google, will certainly confirm that. Such equipment has military advantages, but also creates new kinds of vulnerability. A potential enemy could have started years ago to influence the production process of these robots, so that they can in turn be influenced when the need arises. Naturally, all producers of military robots will try to avoid that, but that takes additional effort.
Wars usually include attacks on civilian structures and our structures are becoming more vulnerable to cyber-attacks. Air traffic management, for example, must handle civilian piloted planes, military piloted planes, civilian drones and military drones. Automated cars are becoming reality and will depend on country-wide information systems that can be sabotaged. Further efforts will therefore be required to defend civilian infrastructure.
Cyber warfare also changes the international diplomatic game. Small groups without state backing could do considerable damage, particularly when they can configure AI tools to help them in their undertaking. Hostile governments can conduct cyber-attacks in other countries without officially taking responsibility. Allies of the attacked country, who do not want to be inconvenienced, will find it much easier to ignore a cyber-attack than a physical attack. This will reduce the extent to which allies can be relied on and implies that each government has to spend proportionally more on cyber warfare, where it might well stand alone, than on conventional warfare, where it can place more trust in the support of its allies.
We can expect that a considerable part of the top graduates in all major countries of the world will work with AI tools that collect data and model outcomes related to defence and security. Billions of AI agents are likely to spread on the internet, in the form of drones or as smart dust, collecting data on how people behave, how gadgets work and how pieces of infrastructure hang together. Powerful analytics engines on large computer clusters, tightly interlinked with a lot of very smart people, will continuously model how our systems might fail, how an enemy might use them to do harm, how such harm could be prevented and how one could interfere with such systems to strike back at an enemy.
Some people will argue that this arms race can be avoided in some way. Assuming that we have to deal with it, what would be the rules of this game?
- Civilian, military, and private security are obviously interlinked. A country that can integrate them will get strong defence at a good price. A country in which the military, companies and citizens are at loggerheads will be weak and waste money. Cyber defence should have a face that citizens can like.
- Any single agency tasked with cyber defence would easily be the smartest and most powerful organisation in the country, creating obvious problems with democratic control. Outsourcing cyber security work to private companies, universities and foundations would reduce the problem to some extent, increase trust and spread the considerable knowledge created this way, because the need to defend our society will teach us a lot about how it actually works.
- In order to avoid a central point of failure, the cyber defence efforts distributed over many suppliers should be coordinated in parallel by several centres of comparable influence that monitor and supervise each other. These centres should be arranged in a resilient network that ensures operation according to the given rules even when one of the centres fails, i.e. turns inactive or hostile.
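The peer-supervision rule in the last point can be sketched very simply: a centre is excluded from the coordination network once a majority of its peers report it as inactive or hostile, so no single centre can remove another on its own. The centre names and the majority threshold below are illustrative assumptions only.

```python
def surviving_centres(centres, suspicions):
    """Drop any centre that a majority of its peers flag as failed.

    suspicions maps each reporting centre to the set of centres it
    considers inactive or hostile. Requiring a majority prevents one
    compromised centre from expelling a healthy one.
    """
    majority = len(centres) // 2 + 1
    active = set(centres)
    for c in centres:
        accusers = {p for p in centres
                    if p != c and c in suspicions.get(p, set())}
        if len(accusers) >= majority:
            active.discard(c)
    return active


centres = ["north", "south", "east", "west"]
# Three of the four centres report "east" as hostile:
suspicions = {"north": {"east"}, "south": {"east"}, "west": {"east"}}
print(sorted(surviving_centres(centres, suspicions)))  # ['north', 'south', 'west']
```

Real mutual-supervision protocols are considerably subtler (they must cope with colluding centres and unreliable communication), but the majority rule captures the basic idea of a network that keeps operating when one centre fails.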
Catalyst Groups that work outside the established government security structure and have no specific loyalty to any of the players might be able to develop and advance promising solutions.
The development of AI tools is a fast, dynamic process in which small initial changes will have a major effect on the outcomes. Catalyst Teams that can provide reliable information for decision makers by clarifying innovative concepts, testing them in pilot studies and distributing the results in an effective manner might therefore have a considerable impact on this process.
Politicians might contribute to Catalyst Teams as participants or sponsors and use them to filter or amplify ideas and use moderate resources to maximum effect.
Transpolitica is carried by future-oriented people with a wide range of professional backgrounds and could therefore contribute to establishing such a Catalyst Team.
The article above features as Chapter 4 of the Transpolitica book “Anticipating tomorrow’s politics”. Transpolitica welcomes feedback. Your comments will help to shape the evolution of Transpolitica communications.