Artificial Intelligence in the UK: Risks and Rewards

The following report was created by Transpolitica senior consultant Alexander Karran in response to the ongoing inquiry into robotics and artificial intelligence by the UK parliament’s Science and Technology Committee.

The report was submitted on behalf of Transpolitica, to address the topics listed on the Science and Technology Committee inquiry page:

  • The implications of robotics and artificial intelligence on the future UK workforce and job market, and the Government’s preparation for the shift in the UK skills base and training that this may require.
  • The extent to which social and economic opportunities provided by emerging autonomous systems and artificial intelligence technologies are being exploited to deliver benefits to the UK.
  • The extent to which the funding, research and innovation landscape facilitates the UK maintaining a position at the forefront of these technologies, and what measures the Government should take to assist further in these areas.
  • The social, legal and ethical issues raised by developments in robotics and artificial intelligence technologies, and how they should be addressed.

The author thanks members of Transpolitica and the Transhumanist Party UK for their feedback on previous drafts of this report.

Note: the entire set of submissions accepted by the Science and Technology Committee is available online.

Robotic handshake

Executive Summary

This briefing introduces Artificial Intelligence (A.I) as it is applied in industry today, and outlines what the United Kingdom can do to take full advantage of the technology. The briefing covers four areas: the implications of robotics and artificial intelligence for the UK; gaining and maintaining primacy in A.I technologies; the social and economic opportunities afforded by A.I technologies; and issues in developing robotic and artificial intelligence technologies.

  • A.I is fast becoming an integral part of everyday life. In the coming decades, its integration into the digital ecosystem will be such that almost all technology will have an “intelligent” component.
  • Advances in A.I, robotics, technology and the sciences are approaching an exponential curve, owing to convergence driven by information technologies.
  • A.I, robotics and automated processes are highly likely to displace vast swathes of the labour force within the next two decades: potentially 15 million jobs are amenable to automation, by either robotics or software, across an ever-broadening range of occupations.
  • Changes in the distribution of the capital-labour ratio will lead to a hollowing out of low, mid and high skill workers.
  • Re-deployment into the workforce after a period of re-education and skill improvement may not be possible, due to the increased pace of change; this requires a radical rethink of what it means to learn and work in a rapidly evolving digital economy.
  • A basic income should be investigated to offset the reduction in employment opportunities allowing for greater social mobility, a basic standard of living and reduced perception of inequality.
  • There needs to be a radical reform of the national curriculum and educational system to focus above all else on the creative use of technology from an early age, potentially as early as year two and becoming more intensive by year twelve.
  • A.I technologies can be used in the education system to improve the delivery of materials and subject matter, and to improve acceptance of A.I. The integration of A.I tools into education is not something that needs to be developed from first principles, but can draw upon the existing field of A.I.Ed.
  • Educating citizens at all levels of society about the effects of A.I upon the UK will prove to be the biggest challenge if the UK is to gain primacy in this area.
  • Defence applications of A.I, robotics and automation require serious consideration and rapid development if the UK government is to safeguard its citizens and allies. However, there are ethical and moral considerations and boundaries to be reflected upon.
  • If the UK can respond fast enough and commit to investing in education, digital infrastructure and redeployment of research funding, then coupled with existing industrial and research frameworks the UK is well placed to reap the benefits of becoming a leading player in the A.I domain.

1.    Introduction

  1. Not since the dawn of the industrial revolution in 1750 has there been a period in history that has so radically altered society, economic growth and technological development. Since the start of this revolution, there has been a rapid but linear pattern of growth and development, resulting in three distinct eras. The first era was the industrial revolution (mid-18th century); the second was the period of mass industrialisation (mid-19th century), which has now slowed; and the third is the Information Technology (I.T) revolution, which began in the latter half of the 20th century. The I.T revolution, however, has broken this trend of linear progress and set in motion a period of exponential growth and development. This new revolution, which comes fast on the heels of the previous one, has been termed the fourth industrial revolution[1] or the second machine age[2].
  2. Thus far, this new “digital age” has been characterised by the wide adoption of the internet, the creation of so-called “cyber-physical” systems (replacing traditional infrastructure with digital technologies), and by convergence, in which reliance and co-dependence on data-driven processes provided by information technologies is blurring the traditional divisions between scientific and industrial domains. This co-dependence is accelerating scientific and industrial progress in many areas, such as genetic engineering, regenerative medicine, and automation driven by advances in robotics and algorithmic control, or Artificial Intelligence (A.I).
  3. Advances in A.I have arguably been the principal technology contributing to current progress, and one that is evolving in near real-time. A.I is set to become the most advanced technological tool developed by man since the discovery and use of fire. There are many definitions of artificial intelligence, such as Machine Consciousness, Narrow Artificial Intelligence, Machine Learning, Strong Artificial Intelligence and Artificial General Intelligence. However, for the purposes of this briefing, the focus is upon narrow artificial intelligence applications, as these are currently heading into mainstream deployment.
  4. A.I, loosely defined, is a set of statistical tools and algorithms that combine to form, in part, intelligent software that specialises in a single area or task. This type of software is an evolving assemblage of technologies that enable computers to simulate elements of human behaviour such as learning, reasoning and classification. Examples of A.I range from classification algorithms used to label images on social media platforms, text-mining algorithms and junk-email filters, to more complex applications in computational biology and drug discovery, social network analysis, human gameplay and healthcare analytics (such as Google DeepMind and IBM Watson).
  5. In this briefing, we will outline the connotative potential of A.I as a tool to effect radical social change, financial stability and growth and an enhancement or declension of human existence. Our aim is to rationally consider the negative implications of A.I based on available evidence and opinion, while at the same time emphasising the benefits of the technology when ethically applied so that every UK citizen can be given the opportunity to live better than well, which is our mandate.
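By way of illustration only (this sketch is not drawn from the report itself), the junk-email classifiers mentioned above are often built on a simple statistical idea: count how often each word appears in each class of message, then label a new message with whichever class makes its words most probable. A minimal naive Bayes version, using only the Python standard library and invented toy data, might look like this:

```python
# Illustrative sketch of a narrow-A.I classifier: a naive Bayes junk-email
# filter built from scratch. Real deployments train on millions of messages
# and use far richer features; this toy version shows only the core idea.
import math
from collections import Counter

def train(examples):
    """examples: list of (text, label) pairs.
    Returns per-label word counts and per-label message totals."""
    counts = {}          # label -> Counter of word frequencies
    totals = Counter()   # label -> number of training messages
    for text, label in examples:
        counts.setdefault(label, Counter()).update(text.split())
        totals[label] += 1
    return counts, totals

def classify(text, counts, totals):
    """Pick the label maximising log P(label) + sum of log P(word | label),
    with Laplace (add-one) smoothing; unseen words are ignored."""
    vocab = {w for c in counts.values() for w in c}
    best_label, best_score = None, float("-inf")
    for label, words in counts.items():
        score = math.log(totals[label] / sum(totals.values()))
        denom = sum(words.values()) + len(vocab)
        for w in text.split():
            if w in vocab:
                score += math.log((words[w] + 1) / denom)
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Hypothetical training messages, purely for demonstration.
training = [
    ("win a free prize now", "junk"),
    ("cheap loans apply today", "junk"),
    ("meeting agenda for tuesday", "legitimate"),
    ("minutes from the committee session", "legitimate"),
]
counts, totals = train(training)
print(classify("claim your free prize", counts, totals))  # prints: junk
```

The same count-and-compare pattern, scaled up and refined, underlies many of the narrow-A.I classification systems described in this briefing.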

2.    Implications of robotics and artificial intelligence on the future UK

  1. If the 1st industrial revolution was characterised as a race between technology, labour and education, the 4th may well be characterised as a race against technology that replaces both brains and brawn. This next race has serious implications for education, especially given current plans to replace state schools, with their broad, enforceable national curriculum, with academies that treat education as a product. However, the greatest area of impact will be upon employment: recent research by the Bank of England[3], amongst others[4], shows that up to a third of jobs (potentially 15 million) are amenable to automation, by either robotics or software, covering an ever-broadening range of occupations.
  2. The eventual effect of automation will be the creation of an “autonomous economy”[5], in which digital processes talk to other digital processes and synergise to create new processes; this will allow industries to employ fewer workers yet complete more work. Traditional trend mappings of the economic landscape (i.e. neo-capitalism) point to a trend in which, as the automation of labour increases, so too does job creation (after an initial period of rapid falloff and recovery). However, there is now a growing body of evidence to suggest that this time things will be both quantitatively and qualitatively different[6], and that this difference is due to wider applications of Moore’s Law, beyond electronics to machine learning and information science. This new trend presents as an exponential growth curve that will see automation applied beyond physical labour to more cognitive labour[7].
  3. As this trend manifests over the coming decades, it will lead to a radical redistribution of the capital-labour ratio by adding a new vector, that of the robot, forever changing the distribution of resources amongst the various strata of society. In certain occupational domains, human labour is likely to continue for technical, economic and ethical reasons. On a technical level, machines today remain inferior to humans at jobs involving creative, highly flexible or affective work, and at tasks that rely on tacit rather than explicit knowledge. It may simply not be economically feasible to replace workers in these areas, or ethically acceptable, as in the case of those providing secondary or tertiary health care, or palliative care.
  4. The coming changes will lead to a hollowing out of low, mid and high skill workers, who would (based upon previous models) re-deploy to the work force after a period of re-education and skill improvement. However, in the new digital ecosystem, re-education will only take an individual so far, as their ability to adapt to rapid advancements decreases with the increased pace of change. This will require a radical rethink of what it means to learn and work in a rapidly evolving digital economy.
  5. Providing education and income to citizens who have lost or are likely to lose employment due to automation will prove to be one of the greatest challenges of the decades ahead. To address this challenge and help manage the social and economic impacts, government should engage with the public in open conversation and wide media outreach about the likely impact of A.I. Key questions must be answered: how will the nature of work change? What types of jobs are likely to be automated? If there is no work, how will I support myself or my family? What can I do to find work? What will there be for my children? The answers provided need to be unambiguous and clear-cut, guiding the public towards those industries with a lower probability of automation and towards education and skills training.
  6. In addition to the implications for the economy, employment, and education, serious consideration needs to be given to the military applications of this technology, in both a global and domestic context. These considerations need to take into account defensive, offensive, automated and autonomous perspectives. International debate is already fuelling calls for a prohibition on the deployment of autonomous systems in the theatre of war on ethical grounds (Red Cross[8], Human Rights Watch[9]). The HRW stated a position with some merit:

 “A requirement to maintain human control over the use of weapons would eliminate many of the problems associated with fully autonomous weapons. Such a requirement would protect the dignity of human life, facilitate compliance with international humanitarian and human rights law, and promote accountability for unlawful acts”

  7. However, given the changing definition of warfare in the modern digital age, “fully autonomous weapons” in this context may not take into account “software as a weapon”. This was demonstrated by the Stuxnet[10] malware deployment, which highlighted a radical shift from “traditional” warfare and hacking to cyber-physical attack engineering; this form of attack will only increase in frequency[11] and sophistication as our technologies and infrastructure systems become smarter. Therefore, the debate must be expanded beyond battlefield deployment to include a review of autonomous systems deployed at all points within the digital ecosystem.

3.    Gaining and maintaining primacy in A.I technologies

  1. In order to gain and maintain leadership in the application and development of A.I technologies, the UK must concentrate its efforts, to varying degrees, in three areas: education, research and industry. The area requiring the highest degree of effort, but with the potential maximum return, will be education. There needs to be a radical reform of the national curriculum that focuses above all else on the creative use of technology from an early age, potentially as early as year two and becoming more intensive by year twelve. In addition to supporting “traditional” learning tasks such as reading and writing, technology should be integrated as an additional support tool, allowing other cognitive tasks such as mathematics and the sciences to be reconceptualised to support the step changes in learner outcomes that are required for modern life and the digital workplace.
  2. Traditional educational practice has thus far focused on developing core cognitive competencies such as reading, writing and arithmetic, with very little variation in how these are taught and applied beyond the education environment. Current and in-development A.I technologies show that machines are already making significant strides towards mastery in all of these areas, and it is our conclusion that creativity should be added to this set of core competencies, a conclusion already supported by others[12]. Creativity in the digital workplace can arguably be defined as the ability to ask questions using advanced intelligent technology and to use the answers to create novel solutions to present and future problems; as such, superior proficiency in data analysis, producing insights and applying solutions, will be a much sought-after skill-set in the coming decades.
  3. Technology and A.I can serve a dual purpose in a reformed educational system, being both the facilitator of high-quality learner experiences and the subject of those experiences. If, in the UK, we could teach the creative use, support and understanding of A.I and information technologies from an early age, we could potentially create a generation of thought leaders and expert practitioners with the foresight to fully utilise the potential of A.I, both at home and abroad, in many industrial domains. However, to achieve this, the educational system would need to foster an environment that encourages critical thinking, rational decision making and multidisciplinary approaches, in order to increase creativity and a learner’s ability to synergise knowledge from disparate sources.
  4. The integration of A.I into education is not something that needs to be developed from first principles; rather, it need only draw from the existing field of A.I.Ed[13], a field of research already rich with methods and technologies which, given opportunity, funding and feasibility studies, could be rapidly deployed. This integration could feed into the current government’s plans for academy-style schools, with a call for “match fund” proposals that blend A.I.Ed into current teaching practice and transform the classroom environment from one that has barely changed in a century into something representative of modern digital life. This has the potential to rapidly evolve the education ecosystem: incorporating machine learning into teaching and learning styles would create an innovative learning environment that supports the learner, the teacher and the A.I system itself as it learns how best to serve each learner individually according to their needs, providing a level of personalisation heretofore unknown in teaching practice.
  5. While high quality education concerning A.I technologies is needed at all levels of society, if A.I is to be embraced openly and incorporated fully as part of UK infrastructure, a positive feedback cycle needs to be created between citizens, education, industry and government. If initiated effectively this feedback cycle will fuel growth over and above standard measures of GDP, to include: education as product; innovation and entrepreneurship as a commodity to be shared strategically with allies; digital information infrastructure as a service; and if information security policy interactions are non-repressive, cybersecurity services. Furthermore, if A.I technology development is regulated using a light but firm touch, such a feedback loop allows for both secular development and global participation, providing opportunity for the UK to take global leadership in the development, application and commercial exploitation of A.I technologies.

4.    Social and economic opportunities

  1. The number of social and economic opportunities afforded by developing A.I technologies is practically limitless. The development and broad acceptance of the technology within all levels of society will lead to advancements in many disparate fields: from healthcare and healthcare provision, to critical infrastructure management and resilience, to decision support tools and forensic process automation. For example, advancements in the fields of medicine and biology due to the application of current A.I have been truly revolutionary. A.I has allowed the tremendous amounts of data used in research to be analysed faster than ever before. Future developments in A.I will further increase the pace of medical discoveries and lifesaving medical interventions, accelerating progress in DNA mapping, drug discovery, genetic modification and synthetic biology, and propelling the biological sciences to a whole new level.
  2. The UK, as a world leader in synthetic biology and the biological sciences, is well placed to take advantage of A.I technologies in this domain, not just through our research frameworks, but also, in the future, through a reformed education system that incorporates A.I.Ed and penetrates all levels of society, making the use of A.I to complete tasks or reach goals as natural as using a calculator or pen and paper.
  3. Many pundits, experts, economists and capitalists argue that specialised narrow artificial intelligence applications, robotics and other forms of technological automation will ultimately result in a significant increase in human unemployment and underemployment within many fields of human endeavour (Deloitte[14], Financial Times[15], RSA[16]). This significant “hollowing out” of labour at all levels of the employment ladder may well result in a fundamental shift in UK society, leading to much greater inequality, lesser social justice and a greater potential for social unrest. However, if managed effectively and with all due ethical consideration, the further development of A.I and the concomitant increase in the automation of labour could become a boon to the UK: a basic income would reduce the everyday burden on citizens, freeing their creativity and innate empathy while increasing productivity. Additionally, providing a basic income would allow large portions of the welfare system to be scrapped, as means testing and fraud detection would no longer be required. This would increase social mobility for citizens at a time when it is required, in order to re-educate or re-train and remain a viable prospect in a shrinking employment market.
  4. Thus, while it is increasingly likely that grande-masse automation will reduce the number of employment opportunities, benefits can accrue through an evolved education system that creates new employment opportunities. A.I.Ed could potentially become a large area of economic growth, helping to increase acceptance of A.I, robotics and automation in the general populace. Growth would be stimulated by the sheer number of industrial and cognitive domains required to support the development of educational A.I systems: educators, computer scientists, information technologists, designers, technologists, infrastructure specialists, content creators, and the sub-domains that support them.
  5. A number of other opportunities, both economic and social, will come from developing A.I technologies in the UK and from advancements within the industry. For example, imagine a transport management system able to respond in real-time to traffic conditions nationally and locally, with the ability to update automated and non-automated vehicles with prevailing conditions and alternative courses of action, ensuring optimal traffic flow and reducing fuel consumption, accidental damage and time-to-travel as a consequence. The same types of system could be applied to critical infrastructure to provide greater resilience and fault tolerance. Another example, within healthcare, would be the use of Big Data analytics to provide diagnostic support to GPs, hospitals and secondary care providers, allowing for fully personalised health care through the rapid diagnosis and identification of disease states and of the interventions that would work most effectively for the individual, helping to reduce the cost overhead associated with prolonged care due to diagnostic exploration and drug provision.
  6. The possibility exists that A.I could be used within governance, providing evidence and helping to fact-check statements and build policy. A decision-support A.I could in effect act as a buffer between politician and policy, ensuring that no unintended consequences are likely to arise before policy becomes actionable. This is an area of active research that could provide economic and status benefits should the UK encourage its growth.

5.    Issues in developing robotic and artificial intelligence technologies

  1. The biggest challenge the UK faces in developing A.I technologies is educating the populace in its use, benefits and risks, and doing so quickly: from those in early learning, attending college or university, to those performing jobs soon to be automated or those already retired, all must be made aware of the coming changes. It will not simply be a case of injecting some A.I subject matter into schools and colleges and hoping that learners and schools adapt; the change to an “A.I mind-set” needs to be systemic, affecting all levels of our society. For the UK to prosper, an equal focus must be placed on the practical applications of A.I in addition to creating and understanding the technology. The UK must focus on creating a generation of machine learning practitioners, through early learning and by advocating “degree apprenticeships” or vocational certification.
  2. Short-sightedness could cause the UK to fall at the first hurdle in its efforts to capitalise upon A.I technologies: existing austerity measures could inhibit any effort through lack of funding. Taking advantage of A.I and its development would require a significant redeployment of funds towards those scientific and industrial domains which demonstrate multidisciplinary approaches, utilising A.I to provide services nationally and globally, or those applying A.I to solve problems specific to the UK and its society.
  3. Another challenge facing the UK will be ensuring positive applications of A.I. A balance must be struck between national security needs and the personal freedoms afforded to UK citizens: applying fully autonomous A.I to surveillance tasks targeted at citizens is a minefield of unparalleled danger. While the state is tasked with the security of its people, policing thought and action beyond the confines of just law lies outside of its remit.
  4. A nimble and lean directorate consisting of ministers, economists, scientists, policy experts and futurists should be created, able to respond to technological advances in near real-time. This expert policy group should advise upon and revise policy in line with the pace of technological change. Rather than traditional precautionary approaches to policy decisions, this group should adopt a proactionary approach to the policy and regulation of A.I (i.e. a light touch, but ethically constrained), in order to reap the benefits that the technology can bring to society while advancing understanding of its negative consequences. The group should advise upon or create policy and legislation robust enough to adapt to rapid and radical change, without falling into traditional deny-all regulation.
  5. While it is lamentable that we live in a world of warring nation states, unmitigated threats and intractable ideologies, defence is another area in which the nation’s technological expertise and thought leadership can be applied. Investment in A.I and robotics for national defence is increasing globally[17], and it is in the UK’s best interest to increase research and development in this area in order to keep abreast of the changing nature of warfare. A full analysis of how A.I can be applied to defence ethically and morally is beyond the scope of this briefing. However, artificial intelligence and automated defence could potentially be an area of economic growth and a driver of global stability for this century, much as nuclear weapons and the potential for Mutually Assured Destruction were for the latter half of the previous century. The potential risks of A.I and robotics applied to warfare cannot be overstated.

6.    Conclusion

  1. A.I research and development has an immense amount of momentum behind it, socially, technically and economically. The question is not whether we should develop A.I further, but how fast the nation can mobilise resources in industry, education and the civil service to take advantage of this brief period of research into and exploration of the technology. The government must make a clear statement defining the nation’s role as a leading light and technologically advanced society at the forefront of A.I development, in terms of both the nation’s ability to prosper and its ability to defend itself.



[2] Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.




[6] Brynjolfsson, E., & McAfee, A. (2014). The second machine age: work, progress, and prosperity in a time of brilliant technologies. WW Norton & Company.












Cyborgization: A Possible Solution to Errors in Human Decision Making?

By Dana Edwards and Alexander J. Karran

Cyborg brain


Accelerating social complexity, in combination with outstanding problems such as attention scarcity and information asymmetry, contributes to human error in decision making. Democratic institutions and markets both operate under the assumption that human beings are informed rational decision makers working with perfect information, full situation awareness, and unlimited neurological capacity. We argue that, although these assumptions are incorrect, the resulting errors could to a large extent be mitigated by a process of cyborgization, up to and including electing cyborgs into positions of authority.


In the modern information age, governing bodies, business organisations and adaptive systems are faced with ever-increasing complexity in decision-making situations. Accelerating rates of technological and social change further compound this systemic complexity. In this complex environment, the effects of human cognitive bias and bounded rationality become issues of great importance, impacting upon domains such as political policy, legislation, business practice, competitiveness and information intelligence.

In this text we shall use regulatory capture as an illustration of how human cognitive bias and conflicts of interest interact in the politico-economic space to create disproportionate advantage. We shall also hypothesize a novel potential solution to human cognitive bias in the form of human-machine hybrid decision support.

In broad terms, regulation encompasses all forms of state intervention in economic function, and more specifically intervention with regard to the control of natural monopolies. The term “regulatory capture” is used to explain a corruption of the regulatory process. Regulatory capture has both narrow and broad interpretations. The broad interpretation is that it is a process through which special interest groups can affect state intervention, ranging from the levying of taxes to legislation affecting the direction of research and development[i]. The narrow interpretation places the focus specifically on the process through which regulated monopolies exert pressure to manipulate state agencies to operate in their favour[ii].

What these interpretations express is that regulatory capture generally involves two parties: the regulated monopoly and the state regulatory agency. The process of regulatory capture can be two way: just as corporations can capture government regulation agencies, the possibility exists for government agencies to capture corporations. As a result of this process, government regulatory agencies can fail to exert financial and ethical boundaries if they are captured, while corporations can fail strategically and financially if they are captured.

Regulatory capture takes two forms: materialist and non-materialist capture. In materialist capture, which is primarily financially motivated, the mechanism of capture is to appeal to the self-interest of the regulators. Materialist capture alters the motives of regulators based on economic self-interest, so that they become aligned with the commercial or special interest groups which are supposed to be regulated. This form of capture can be the result of bribes, political donations, or a desire to maintain government funding. Non-materialist capture, also called cognitive or cultural capture, happens when the regulator adopts the thinking of the industry being regulated. Status and group identification both play a role in the phenomenon of regulators identifying with those in the industry they are assigned to regulate[iii].

Given the current socio-political climate of accelerating technological and social change, consideration should be given to how organisations are formed. Organisations should be structured to resist or otherwise minimise any service disruption caused by regulatory capture, so that if the process of normative regulation fails, i.e. in situations where the balance of the relationship between the two entities has become corrupted, the service which required regulation in the first place can remain available after the failure.

One example of potential government regulatory failure due to a captured agency is the Environmental Protection Agency (EPA) hydraulic fracking scandal of 2004. The EPA released a report[iv] in which it stated that hydraulic fracturing posed “little or no threat” to drinking water supplies. Whistle-blower Weston Wilson publicly disputed[v] this conclusion and exposed five of the seven members of the peer review panel as having conflicts of interest. These conflicts of interest allowed elements within the administration to apply pressure, and to become involved in discussions about how fracking would eventually be portrayed in the report. Due to this pressure, the EPA may have been unable to publish a genuine conclusion about the safety of fracking. This reveals a potential failure of the EPA to protect the public interest due to regulatory capture.

Another example of regulatory capture concerns a dramatic failure of regulatory oversight at the British Nutrition Foundation (BNF), one of the UK’s most influential institutes on diet and health. The BNF, established more than 40 years ago, advises government, schools, industry, health professionals and the public, and exists solely to provide “authoritative, evidence-based information on food and nutrition”[vi]. Its ability to provide independent evidence-based advice has, however, been called into question, given its apparent bias towards promoting the views of the food industry and the organization’s lack of transparency when reporting funding sources.

This comes as no surprise given that 39 members of its funding membership come from the food industry[vii]. For example, in October 2009, when a television commercial for a member company’s probiotic yoghurt product was banned, the BNF spoke out in support of the product (and thus the company), claiming that there is “growing evidence that a regular intake of probiotics may positively influence our health”. Thus, while appearing to take a stance on the grounds of public health, the BNF would appear to have been protecting its own interests and those of a member company under the guise of regulatory oversight.

Factors that affect human decision making within complex adaptive systems

The examples of regulatory capture described above highlight some of the issues associated with human cognitive bias, specifically within a complex adaptive system (such as a government or corporation) where rational choice is bounded by self-interest combined with overarching organizational goals. In information-saturated environments such as these, human cognitive limitations degrade rational decision making, forcing the individual or organisation to rely on shortcuts which may lead to error. A number of psychological and social factors, such as “attention scarcity”, “information asymmetry” and “accelerating societal complexity”, contribute to poor decision making within complex organisational structures. Awareness has been growing that human attention has become a scarce resource in the information age, and attention scarcity ultimately relates to the economics of attention.

Attention scarcity refers to the cognitive limit on how much information a human can digest and attend to in a given period of time (a constraint also framed in terms of an information economy). Simply put, “attention is a resource – a person only has so much of it”[viii]. Thus, in a low information economy, any item brought to the attention of decision makers is assessed on the economic properties deemed decisive for its profitability. In contrast, in a high information economy, the sheer diversity of items limits perception, and only choices that expose decision makers to sufficiently strong signals are viable.

Attention scarcity is a weakness of human cognition which can be purposefully exploited. Consider, for example, the U.S. Affordable Care Act, whose associated rules run to over 9,000 pages. Most voters likely lacked sufficient “attention” to read and digest each page while the act was being debated. Given the complexity of legislative law, even if a team of “netizens” formed to crowdsource the reading and analysis of a new law, it is unlikely that they could interpret and understand it within the timeframe available to object if needed.

The effects of attention scarcity are observed in poor public understanding not only of legal documents, but also of complex open source software. Open source developers may allow anyone to read their source code, yet that code can contain so many lines of opaque, obfuscated logic that very few users, or even other developers, understand how it works. Attention scarcity thus produces information asymmetry between the open source developers who can decipher the code and everyone else who may or may not choose to use the software.

Information asymmetry is a serious factor intrinsic to cognitive bias in human decision making. It concerns transactions in which one party possesses, or is perceived to possess, more or better quality information than the other. This creates an imbalance in the transaction’s power dynamic which may lead to a collapse of trust and, in the worst case, a kind of market failure.

Accelerating societal complexity refers to the structural and cultural aspects of our institutions, whose practices are characterized by the “shrinking of the present”: a decreasing time period during which expectations based on past experience reliably match the future[ix]. Combined with accelerating technological progress, this “shrinking” appears to flow ever faster, making decisions based on belief or on the perception of better information problematic.

All of these factors individually influence the human decision making process; in combination they create a decision space that becomes ever more fluid, with a self-reinforcing feedback loop requiring better decisions to be made in shorter spaces of time with incomplete or asymmetric information. Humans make errors all the time, but as society grows ever more complex, these errors have lasting and increasingly dangerous consequences (as in the example of hydraulic fracturing discussed above). To get a clearer picture of a possible basis for this error effect, some discussion of human cognitive limitations is warranted.

The impact of human cognitive limitation

As we have discussed previously, information asymmetry in complex adaptive systems allows for decision error to appear within the system, as the better informed parties possess a marked information advantage which allows them to exploit the ignorance of other parties. This can occur in any field of human endeavour, such as law, science, commerce or governance, where new knowledge will be easier to grasp by those with previous knowledge, given that knowledge is self-referential and compounds on itself[x]. As organizations grow larger and the decision requirements become ever more complex, attention scarcity and information asymmetry can form a feedback loop that – at scale – slows the rate of innovation/knowledge diffusion, as individuals and organisations vie for supremacy in transactions.

Research in cognitive neuroscience suggests that the cognitive abilities of an individual are built on five core systems (objects, agents, number, geometry and social)[xi], each with its own set of limitations. An example of limitation within the social system is “Dunbar’s number”, first proposed by British anthropologist Robin Dunbar[xii], who posited that the number of social group members a primate can track is limited by the volume of the neocortex; while this theory is hotly disputed[xiii], it has yet to be disproven with any certainty. This limitation, if taken to its logical conclusion and scaled to match an average complex adaptive system (such as a regulatory or corporate body), suggests that the decision making abilities of an average individual could be significantly impaired when not augmented by technology or genetic engineering.

This impairment of decision making ability was remarked upon in Herbert A. Simon’s theories of bounded rationality[xiv]. These theories concern rational behaviour in the context of individuals, organisations, and individuals within organisations, which Simon stated were indistinguishable under the “theory of the firm”. In this theory the given goals and the given conditions (of the organization) drive “rational” decision making based on two functions: the demand function (the quantity demanded as a function of price) and the cost function (the cost of production as a function of the quantity produced). These two functions, when applied to complex adaptive systems such as regulatory or governing bodies, demonstrate the vast scope in which human cognitive bias can affect outcomes at the macro scale while appearing to be a series of micro decisions made by individuals.
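
As a toy illustration of the two functions above, the sketch below (in Python, with hypothetical demand and cost curves that are not drawn from Simon’s work) shows how a “rational” firm would choose the output quantity that maximizes profit:

```python
# Illustrative sketch only: the demand and cost functions are invented.
def price(quantity):
    """Inverse demand: the price at which a given quantity can be sold."""
    return max(0.0, 100.0 - 0.5 * quantity)

def cost(quantity):
    """Cost of producing a given quantity: fixed cost plus rising marginal cost."""
    return 500.0 + 10.0 * quantity + 0.1 * quantity ** 2

def rational_quantity(max_q=200):
    """Bounded search over discrete quantities for the profit-maximizing output."""
    return max(range(max_q + 1), key=lambda q: price(q) * q - cost(q))

q = rational_quantity()          # the firm's "rational" choice of output
profit = price(q) * q - cost(q)  # the profit that choice yields
```

Even this toy search illustrates the bounded-rationality point: the decision is only as good as the bound on the search (`max_q`) and the accuracy of the assumed functions.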

Nowhere is this synergy of information, human cognitive ability and bounded rationality seen more clearly than in the case of law. A truism often used in this context is that ignorance of the law excuses no one, but the complexity of law confuses everyone. In a world where few if any members of society know the law, it may well become necessary for people to supplement their own cognitive capacities with “apps” to protect themselves from the law’s complexity. “Lawfare” describes a form of asymmetric warfare which exploits the esoteric and complex nature of the law to damage political opponents. Just as complex words on an ingredient list can be used to hide undesirable ingredients from customers, the law and its potential use as a weapon remain hidden from most citizens.

The current analogue forms of government are based on a complicated, combative bureaucracy (necessary to support representative forms of democracy). Accelerating technological progress, however, suggests that this approach may not scale well as society becomes orders of magnitude more complex in the coming decades. It is our analysis that unless a Transhumanist approach is adopted to enhance existing human decision processes by merging them with technological decision support, catastrophic failures may occur.

In this socially complex future, politicians may have to rely increasingly on information technologies, to the point that they essentially become cyborgs, merging fact checking and recommendation engines – based on rational rulesets – to keep pace with accelerating societal change and to fully encompass monolithic social structures. Citizens, too, may need to adopt similar technologies out of necessity, in order to understand the decisions made by these new “enhanced” politicians and to adapt to and participate effectively in an increasingly complex and fast-changing society. Likewise, the institutions of the future will likely have to adopt human-error-tolerant designs which use the latest decision support technology to mitigate and dampen the consequences of human error.

The Cyborg Citizen: A transcendent solution?

In order to avoid confusion we first have to properly define what we mean by a cyborg citizen. Andy Clark[xv], in his book Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence, argues that human beings are by nature cyborgs, claiming that human neural plasticity and a propensity to build and use tools in everyday life (from handwriting to mobile phones) produce a species that thinks and feels most effectively only through the use of its technologies. Ray Kurzweil[xvi] goes one step further, predicting that by 2030 most humans will choose to be cyborgs:

Our thinking then will be a hybrid of biological and non-biological thinking. We’re going to gradually merge and enhance ourselves. In my view, that’s the nature of being human – we transcend our limitations.

In order to understand what a “cyborg citizen” means in today’s information and technology driven society, we must expand this definition to include current technological and social developments. Indeed, we must recognize that each individual today, and more so in the future, has a digital, a virtual, and a physical self[xvii]. Thus, a cyborg is a person who is (singly or in combination) enhanced by or dependent upon robotic, electronic, or mechanical devices, such as artificial hearts, pacemakers, portable dialysis machines, or mobile/cloud computing services which provide storage, search, retrieval and analysis (SSRA) capabilities, such as Google, Amazon, etc.

Corporations also appear to be taking advantage of technologies that enhance human decision making as a way to adapt to increasing business and market complexity. Venture capital firm Deep Knowledge Ventures named to its board of directors[xviii] an algorithm called VITAL, which it intends someday to evolve into a full-fledged artificial intelligence. This move may represent one of the first forays in what may become a trend toward human-machine run corporations. Indeed, some go much further and call for the complete replacement of humans within complex organisations (such as government) with artificial intelligence[xix]. However, arguments about the inevitable rise of artificial general intelligence aside, we advocate a “human-in-the-loop” approach: the merger or bonding of human ethical and moral “instinct” with a bounded rational decision support engine, existing either in digital space or embedded in the human central nervous system via implants.

So what would such a cyborg citizen look like? Below is a list of hypothetical decision support systems which are presently borderline (in that they exist, but are not yet fit for purpose) and which could exist in digital space, employing SSRA capabilities to allow enhanced human-machine hybrid decision making.

  • Intent casting: Intent casting, originally described by Doc Searls, allows consumers to directly express their wants and needs to the market. This could allow for the digitization of intent and for agent-based AI to shop on behalf of customers.
  • Algorithmic democracy: Algorithmic democracy would, in theory, allow voters to delegate their voting decisions (and thus agency) to an algorithm, which could be referred to as a digital voting agent (DA). Examples of digital agents today include Siri, Amazon Echo, and Cortana. As these DAs become more capable, voters could come to rely on their DA to inform them how to vote in accordance with their specific interests and preferences.
  • Digital decision support consultants: These are intelligent decision support systems that would help professionals make better decisions. There are likely to be apps for different professions, such as IBM’s WellPoint for doctors, legal assistant apps, and real-time fact checkers[xx]. These apps may be decentralized collaborative applications with human and robot participation, or they may be software-agent-based AI. This category also includes algorithms such as Deep Knowledge Ventures’ VITAL, agents that track relationships and the flow of information between groups within a complex organization, and brokers between two transaction parties.
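
As a concrete (and purely hypothetical) sketch of the digital voting agent idea above, the Python fragment below matches a voter’s weighted issue preferences against candidates’ stated positions; every name, weight and score is invented for illustration:

```python
# Hypothetical voter preference weights over policy issues (sum to 1).
voter_preferences = {"healthcare": 0.5, "economy": 0.3, "environment": 0.2}

# Hypothetical candidate positions, scored 0-1 per issue.
candidate_positions = {
    "Candidate A": {"healthcare": 0.9, "economy": 0.4, "environment": 0.3},
    "Candidate B": {"healthcare": 0.2, "economy": 0.8, "environment": 0.9},
}

def alignment(preferences, positions):
    """Weighted agreement between the voter's preferences and one candidate."""
    return sum(weight * positions[issue] for issue, weight in preferences.items())

def recommend(preferences, candidates):
    """The candidate the digital voting agent would advise supporting."""
    return max(candidates, key=lambda name: alignment(preferences, candidates[name]))
```

A real DA would of course need verified position data and a far richer preference model; the point here is only that delegated voting advice reduces to a ranking problem.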

Examples of algorithms that hypothetically speaking, could run on physiologically embedded technology, directly accessible by the human brain to provide decision support:

  • Generate and test search: a reinforcement-learning, trial-and-error algorithm which searches a limited solution space in a systematic manner to find the best solution[xxi]. In operation this algorithm would generate possible solutions to a set problem and test each until it finds one that passes a positive threshold, whereupon the solution is relayed to the human cognitive process for a potential decision and reinforcement. This kind of technique can take advantage of simulation testing and solve problems with a limited solution space, such as those presented by the “free market” or those requiring a quick human decision in a “lesser of two evils” scenario.
  • Global optimization search: evolutionary algorithms inspired by the biological mechanisms of global optimization, such as mutation, crossover, natural selection and survival of the fittest[xxii]. These algorithms search a solution space and compare each solution against a desired fitness criterion. Where human input is necessary to evolve a solution, an interactive evolutionary algorithm could make the human the solution selector while the algorithm acts as the solution generator, with candidate solutions generated and evolved for improved fitness.
  • Markov decision processes: an experimental framework for decision making and decision support. A Markov decision process automates finding the optimal decision for each state while taking into account each action’s value in comparison to the others – essentially an idealised decision output for a given problem state. With human decision selection driving the process, the ramifications of each decision at each stage of the problem analysis can be carefully considered and accepted or rejected on the basis of rational choice.
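
The generate and test loop described in the first bullet can be sketched in a few lines of Python; the toy problem and acceptance threshold below are invented for illustration:

```python
import random

def generate_and_test(generate, score, threshold, max_tries=10_000):
    """Propose candidate solutions until one scores above the acceptance threshold."""
    for _ in range(max_tries):
        candidate = generate()
        if score(candidate) >= threshold:
            return candidate  # relayed to the human for final acceptance
    return None  # search budget exhausted without an acceptable solution

# Toy problem: find a number within 2 of a hidden target.
target = 42
solution = generate_and_test(
    generate=lambda: random.randint(0, 100),
    score=lambda x: -abs(x - target),
    threshold=-2,
)
```

In the human-in-the-loop setting described in the text, the returned candidate would be a proposal, not a final decision.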
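
Similarly, the global optimization bullet can be sketched as a minimal evolutionary loop (mutation and elitist selection only; crossover is omitted, and the single-peak fitness landscape is invented):

```python
import random

def evolve(fitness, population_size=30, generations=100, mutation_scale=0.5):
    """Evolve real-valued candidates toward higher fitness via mutation + selection."""
    population = [random.uniform(-10, 10) for _ in range(population_size)]
    for _ in range(generations):
        # Each survivor produces one mutated offspring.
        offspring = [x + random.gauss(0, mutation_scale) for x in population]
        # Keep the fittest individuals from parents and offspring combined.
        population = sorted(population + offspring, key=fitness, reverse=True)[:population_size]
    return population[0]

# Toy fitness landscape with a single peak at x = 3.
best = evolve(lambda x: -(x - 3) ** 2)
```

In the interactive variant described above, the `fitness` callback would be replaced by a human judging each candidate.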
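
Finally, the Markov decision process bullet can be sketched with value iteration, which computes an idealised value and optimal action for every state; the two-state problem and its rewards are invented for illustration:

```python
def value_iteration(states, actions, transition, reward, gamma=0.9, iters=200):
    """Return state values and the optimal (greedy) action for each state."""
    values = {s: 0.0 for s in states}
    for _ in range(iters):
        values = {
            s: max(
                sum(p * (reward(s, a, s2) + gamma * values[s2])
                    for s2, p in transition(s, a).items())
                for a in actions
            )
            for s in states
        }
    policy = {
        s: max(actions,
               key=lambda a: sum(p * (reward(s, a, s2) + gamma * values[s2])
                                 for s2, p in transition(s, a).items()))
        for s in states
    }
    return values, policy

# Toy problem: being in the "risky" state pays more per step.
states, actions = ["safe", "risky"], ["stay", "switch"]

def transition(state, action):
    other = "risky" if state == "safe" else "safe"
    return {state: 1.0} if action == "stay" else {other: 1.0}

def reward(state, action, next_state):
    return {"safe": 1.0, "risky": 2.0}[next_state]

values, policy = value_iteration(states, actions, transition, reward)
```

Here the computed policy (switch to, then stay in, the higher-reward state) is the “idealised decision output” referred to above; a human decision maker would still accept or reject it.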

This list is by no means exhaustive; there may be other borderline hypothetical decision support systems and algorithms not mentioned here. It nevertheless gives a general idea of how embedded or digital artificial decision support agents could improve decision quality in certain sectors of human society. By improving decision quality through technology and semi-autonomous agencies, we may be able to reduce the frequency of poor decisions which result from nothing more than human error and/or ignorance.

Discussion: Checks and Balances

We do not propose that cyborgization is a perfect solution to the problem of human cognitive limitation and decision error in complex social systems. Decision support systems already exist in one form or another; however, they are still at an early stage of development and are not ubiquitous, so technology such as VITAL benefits only large corporations and perhaps the intelligence establishment. The situation is similar to the early stages of computing and the Internet in the 1960s, during the Cold War: both existed, but their benefits were limited to certain domains.

We believe the widespread adoption of decision support technology, be it embedded or digital, could provide the tools necessary for individuals to comprehend the entirety of complex organizations, model the decision-consequence space and select ethical decisions. These tools would essentially enable decision makers to take into account individual need and motivation, and provide ethical solutions which afford the greatest good for the greatest number, without creating asymmetric information economies.

An example of a beneficial application of cyborg technology would be a doctor who uses WellPoint[xxiii] to make diagnoses based on a combination of learned skillset and a digital health agent with a broad, specialist, evidence-based knowledge base. Alternatively, in a quantified-self context, an individual could upload health data gathered from wearable sensors and receive notice of potential health issues, which doctors with access to this information could then treat with alacrity in their early stages after reviewing treatment options.

However, such technology and its application would not come without limitations or risks. The widespread use of these technologies could lead to a form of information “cold war”, in which human and machine agents (singly or in combination) attempt to create a state of “perfect information” to gain a competitive advantage. They may seek a form of perfect regulatory capture, in which one party always holds an advantageous position in any transaction, be it in the free market or in the policy, legislative or intelligence domains. Arguably, such an information cold war already exists between various governments, intelligence services and corporate entities; and while the “battleground”, as it were, is in so-called cyberspace, it remains primarily an analogue concern in which agency is biological, i.e. human, as opposed to A.I.

It is a sad reflection upon humanity that one “positive” aspect of this cold war scenario is that competition (war) leads to innovation, as opposing sides race to gain the information advantage. This impetus would accelerate the development of the technologies required to create a “true” cybernetic individual or a generally intelligent artificial agent. It is a matter for debate whether this would benefit humanity in general or lead to a totalitarian dystopia, in which one entity or organisation exists in a near perfect state of “knowing”, stifling the development of both technology and society.

It is our opinion that the potential benefits of cyborgization outweigh the potential risks. As our technological systems and culture grow ever more complex, we must consider the risk of human error, of bad decisions, of ignorance combined with advanced technologies, in the light of a technology so pregnant with possibility.

We realize cyborgization is a controversial subject; however, we see it as an unavoidable and unstoppable trend. Indeed, Ginni Rometty (Chairman and CEO of IBM) recently stated that:

In the future, every decision that mankind makes is going to be informed by a cognitive system like Watson, and our lives will be better for it[xxiv]

This statement is very much in accordance with our notion of keeping the human-in-the-loop during decision making. Furthermore, given the current reliance of vast numbers of the world’s population on mobile phones and internet search engines, an argument could be made that rather than becoming cyborgs at some specific point in time (as in Kurzweil’s prediction), we have always been cyborgs (as per Clark’s argument), and it is merely a matter of time and technology until the line between what is human and what is our technology becomes non-existent.


Just as search engines allow human beings to find the relevant information meeting their “criteria”, the adoption of decision support engines could allow autonomous digital agents and human-machine hybrids alike to find the most ethical decision within a given consequence-decision space. This approach would allow for “what if” hypothesis testing[xxv] of many decision types, such as policy determination, legislative impact, market transactions and global consequences. The dawn of ethical computing is fast approaching, and it is this area that requires our fullest attention. Transhumanism provides a socially progressive framework which, if adopted, can allow us to transcend our human cognitive limitations, so that we become more effective and ethical decision makers. We believe that developing the technology which can facilitate our arrival at the cyborg stage of human leadership should be a top priority, especially in this time of accelerating developments in Artificial Intelligence, which, if left unsupervised, could surpass us to become the apex decision maker for our entire species.
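
The “what if” hypothesis testing mentioned here can be sketched as a simple consequence engine, loosely in the spirit of Winfield et al.[xxv]: simulate each candidate action, score the predicted outcome ethically, and select the best. The actions, predictions and scoring rule below are invented for illustration:

```python
def most_ethical_action(actions, simulate, ethical_score):
    """Pick the action whose simulated outcome has the highest ethical score."""
    return max(actions, key=lambda action: ethical_score(simulate(action)))

# Hypothetical predicted consequences for each candidate decision.
predicted_outcomes = {
    "do_nothing":    {"harm": 5, "benefit": 0},
    "intervene":     {"harm": 1, "benefit": 3},
    "intervene_all": {"harm": 3, "benefit": 4},
}

choice = most_ethical_action(
    actions=list(predicted_outcomes),
    simulate=predicted_outcomes.get,            # stand-in for a real simulator
    ethical_score=lambda o: o["benefit"] - o["harm"],
)
```

Everything difficult lives inside the simulator and the scoring function; the sketch only shows how hypothesis testing over a consequence-decision space reduces to search once those exist.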


[i] Stigler, G. (1971), “The Theory of Economic Regulation.”, Bell Journal of Economics and Management Science, 2, 3–21

[ii] Peltzman, S. (1976), “Toward a More General Theory of Regulation.”, Journal of Law and Economics, 19 , 211–48.

[iii] Carpenter, D., & Moss, D. A. (Eds.). (2013). “Preventing regulatory capture: special interest influence and how to limit it.” Cambridge University Press.

[iv] Environmental Protection Agency, “Study of Potential Impacts of Hydraulic Fracturing of Coalbed Methane Wells on Underground Sources of Drinking Water.” Office of Groundwater and Drinking Water report, June 2004 – accessed May 2015.


[vi] Chamberlain & Laurance (2010). “Is the British Nutrition Foundation having its cake and eating it too?” – accessed May 2015.

[vii] Chamberlain & Laurance (2010). “Is the British Nutrition Foundation having its cake and eating it too?” – accessed May 2015.

[viii]Crawford, Matthew B. (March 31, 2015). “Introduction, Attention as a Cultural Problem”. The World Beyond Your Head: On Becoming an Individual in an Age of Distraction (hardcover) (1st ed.). Farrar, Straus and Giroux. p. 11.

[ix] Rosa, H.: “Social Acceleration: A New Theory of Modernity.” Columbia University Press, New York (2013)

[x] Klein, S. B., & Kihlstrom, J. F. (1986). “Elaboration, organization, and the self-reference effect in memory.” Journal of Experimental Psychology: General, 115(1), 26-38. doi:10.1037/0096-3445.115.1.26

[xi] Kinzler KD, Spelke ES. Core systems in human cognition. Progress in Brain Research. 2007;164:257–264

[xii] Dunbar, R. I. M. (1992). “Neocortex size as a constraint on group size in primates”. Journal of Human Evolution 22 (6): 469–493. doi:10.1016/0047-2484(92)90081-J

[xiii] Wellman, B. (2012). “Is Dunbar’s number up?” British Journal of Psychology 103 (2): 174–176; discussion 176–2. doi:10.1111/j.2044-8295.2011.02075.x

[xiv] Simon, H.A. (1972). Theories of bounded rationality. In C.B. McGuire and R. Radner (Eds.), Decision and organization: A volume in honor of Jacob Marschak (Chap. 8). Amsterdam: North-Holland

[xv] Clark, A. (2004). “Natural-Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence.” Oxford: Oxford University Press.

[xvi] Del Prado, G. “Google futurist Ray Kurzweil thinks we’ll all be cyborgs by 2030” – accessed June 2015.

[xvii] The digital and the virtual, while similar, are distinct. Something is virtual if it exists only within a virtual world, whereas something digital exists in the physical world, just in digitized form. The distinction is thus between digital space, a subset of what people consider the physical world, and virtual space, which does not refer directly to any part of the physical world.

[xviii] Wile, R. (2014, May 13). “A Venture Capital Firm Just Named An Algorithm To Its Board Of Directors – Here’s What It Actually Does.” Retrieved June 5, 2015.


[xx] Ciampaglia GL, Shiralkar P, Rocha LM, Bollen J, Menczer F, Flammini A (2015) Computational Fact Checking from Knowledge Networks. PLoS ONE 10(6): e0128193. doi:10.1371/journal.pone.0128193

[xxi] Kaelbling, L. P., Littman, M. L., and Moore, A. W. (1996). “Reinforcement Learning: A Survey.” Journal of Artificial Intelligence Research, Volume 4, pages 237-285.

[xxii] Weise, T. “Global Optimization Algorithms – Theory and Application.” Germany: (self-published), 2009. [Online] – accessed June 2015.



[xxv] Winfield, A. F., Blum, C., & Liu, W. (2014). “Towards an ethical robot: internal models, consequences and ethical action selection.” In Advances in Autonomous Robotics Systems (pp. 85-96). Springer International Publishing


The article above features as Chapter 5 of the Transpolitica book “Envisioning Politics 2.0”.

The image is an original design by Alexander Karran.