Goals and outcomes

Analysing goals and potential outcomes

Once projects are started, they can take on a life of their own.

The trajectory resembles that of the monster created by Dr Frankenstein in Mary Shelley’s ground-breaking novel. A project – especially one with high prestige – can acquire an intrinsic momentum that carries it forward regardless of obstacles encountered along the way. The project proceeds because the people involved:

  • Tell themselves that there’s already a commitment to complete the project
  • View themselves as being in a winner-takes-all race with competitors
  • Feel constrained by a sense of loyalty to the project
  • Perceive an obligation to fellow team members, or to bosses, or to others who are assumed to be waiting for the product they are developing
  • Fear that their pay will be reduced, and their careers will stall, unless the project is completed
  • Desire to ship their product to the world, to show their capabilities.

But this momentum can lead to outcomes that are later bitterly regretted:

  • The project produces results significantly different to those initially envisioned
  • The project has woeful unexpected side-effects
  • Even if it is successful, the project may consume huge amounts of resources that would have been better deployed on other activities.

Accordingly, there’s an imperative to look before you leap – to analyse ahead of time the goals and potential outcomes we can expect from any particular project. And once such a project is underway, that analysis needs to be repeated on a regular basis, taking into account any new findings that have arisen in the meantime. That’s as opposed to applying more funding and other resources regardless.

The bigger the potential leap, the greater the need to look carefully, beforehand, at where the leap might land.

The Singularity Principles address projects that seek to develop or deploy new technology that might, metaphorically, leap over vast chasms. The first six of these principles act together to improve our “look ahead” capability:

  • Question desirability
  • Clarify externalities
  • Require peer reviews
  • Involve multiple perspectives
  • Analyse the whole system
  • Anticipate fat tails

Read on for the details.

Question desirability

The principle of “Question desirability” starts with the recognition that the belief that we could develop some technology, and even the desire to develop it, is not a sufficient reason for us actually to go ahead and develop and deploy it.

Therefore, the principle urges that we take the time, at the start of the project, to write down what we assume are the good outcomes we will obtain from the technology to be developed. Writing these assumptions down allows for a more thoughtful and considered review.

The principle also urges that we consider more than one method for achieving these intended outcomes. We should avoid narrowing our choice, too quickly, to a particular technology that has somehow caught our fancy.

This separation of desired outcomes, sometimes known as “requirements”, from possible solutions is a vital step in avoiding unintended consequences of technologies:

  • Requirements: The outcomes we desire to obtain, as a result of this project, or possibly from other, different, projects
  • Solutions: Potential methods of meeting our requirements – though, if we’re not careful, we can become preoccupied with achieving a particular solution, and lose sight of key aspects of the underlying requirements.

For example, a requirement could be “reduce the likelihood of extreme weather events”. One possible solution is “accelerate the removal of greenhouse gases that have built up in the atmosphere”. But a preoccupation with that solution might lead to experimentation with risky geo-engineering projects, and to a failure to investigate other methods to avoid extreme weather events.

Again, a requirement could be “reduce the threats posed by the spread of weapons of mass destruction”. One possible solution is “accelerate the introduction of global surveillance systems”. But a preoccupation with that solution can have its own drawbacks too.

Documenting our requirements makes it easier to find better, safer, more reliable ways of achieving the outcomes that we have in mind.

The principle of “Question desirability” also recommends that we should, in any case, challenge assumptions about which outcomes are desirable, and be ready to update these assumptions in the light of improved understanding.

Indeed, we should avoid taking for granted that agreement exists on what will count as a good outcome.

That takes us to the next principle, “Clarify externalities”.

Clarify externalities

Recall that an externality is an effect of a project, or of an economic transaction, that extends beyond the people directly involved.

Examples of negative externalities include noises, smells, pollution, resource depletion, cultural chaos, and a general loss of resilience. Examples of positive externalities include:

  • People learning skills as a result of interacting with each other
  • A reduction in the likelihood of non-vaccinated people catching an infection (because the prevalence of the infection in the population is reduced by the people who are vaccinated)
  • The free distribution of second-hand books and magazines.

The principle of “Clarify externalities” draws attention to possible wider impacts (both positive and negative) from the use of products and methods, beyond those initially deemed central to their operation. The principle seeks to ensure that these externalities are included in cost/benefit calculations.

Therefore we should not just consider metrics such as profit margin, efficiency, time-to-market, and disabling competitors. We need to consider broader measures of human flourishing.

What makes this analysis possible is the effort taken, in line with the “Question desirability” principle, to write down the intended outcomes of the technology to be developed. What makes this analysis more valuable are the principles of “Require peer reviews” and “Involve multiple perspectives” to which we turn next.

Require peer reviews

The alternative to requiring peer reviews is that we trust the people who are behind a particular project. We may feel they have a good track record in creating technologies and products. Or that they have outstanding talent. In that case, we might feel a peer review would be a waste of time.

That may be acceptable for projects that are sufficiently similar to those undertaken in the past. However, new technologies have a habit of bringing surprises, especially when used in novel combinations.

That’s why independent peer reviews should be required, involving external analysts who are not connected with the initial project team. These analysts should ask hard questions about the assumptions made by the project team.

The value of these peer reviews depends on:

  • The extent to which reviews are indeed independent, rather than being part of some cosy network of “I’ll scratch your back – give your project a favourable review – if you scratch mine”
  • The extent to which reviewers have up-to-date relevant understanding of the kinds of things that could go wrong with particular projects.

In turn, this depends upon society as a whole placing sufficient priority on supporting high quality peer reviews.

Involve multiple perspectives

The peer review of a project’s proposed goals and likely outcomes should involve people with multiple different skill sets and backgrounds (ethnicities, life histories, etc).

These reviewers should include not just designers, scientists, and engineers, but also people with expertise in law, economics, and human factors.

A preoccupation with a single discipline or a single perspective could result in the project review overlooking important risks or opportunities.

To be clear, these independent analysts won’t necessarily have a veto over decisions taken by the project team. What is required, however, is that the project team, along with their sponsors, take proper account of the questions and concerns these analysts raise.

That proper account should observe two further principles: “Analyse the whole system” and “Anticipate fat tails”.

Analyse the whole system

What’s meant by the “whole system” is the full set of things that are connected to the technology that could be developed and deployed – upstream influences, downstream magnifiers, and processes that run in parallel. It also includes human expectations, human beliefs, and human institutions. It includes aspects of the natural environment that might interact with the technology. And, critically, it includes other technological innovations.

When analysing the potential upsides and downsides of using the new technology we have in mind, we need to consider possible parallel changes in that wider “whole system”.

Some examples:

  • Rather than just forecasting that a new intervention in a biological ecosystem might reduce the presence of some predator species with unpleasant characteristics, we need to consider whether a reduction of that population would trigger a sudden rise in the population of another species, preyed on by the first, with knock-on consequences for the flora consumed by the second species, and so on (a minimal simulation of this kind of knock-on effect is sketched after this list)
  • Rather than extrapolating the level of public interest in a forthcoming new technology from what appears to be only a modest interest at the present time, we should consider the ways in which public interest might significantly change – potentially even causing a panic or stampede – once there are visible examples of the technology changing people’s lives
  • Rather than simply analysing how a piece of new artificial intelligence might behave in the environment as it exists today, we should consider possible complications if other pieces of new artificial intelligence, including adversarial technology, or novel forms of hacking, are introduced into the environment as well.
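
To make the first of these examples concrete, here is a minimal two-species simulation of the kind of knock-on effect involved. The model structure and every parameter are invented purely for illustration; they do not describe any real ecosystem:

```python
# Purely illustrative sketch: what happens to a prey population when the
# predator species that keeps it in check is suddenly removed.

def simulate(remove_predators_at=None, steps=100):
    prey, predators = 24.0, 10.0
    for step in range(steps):
        if step == remove_predators_at:
            predators = 0.0                    # the "successful" intervention
        # Discrete-time Lotka-Volterra-style update (invented parameters)
        prey_change = 0.1 * prey - 0.01 * prey * predators
        predator_change = 0.005 * prey * predators - 0.1 * predators
        prey = max(prey + prey_change, 0.0)
        predators = max(predators + predator_change, 0.0)
    return prey

print("Prey after 100 steps, predators left alone:", round(simulate(), 1))
print("Prey after 100 steps, predators removed   :", round(simulate(remove_predators_at=20), 1))
```

The particular numbers are beside the point; what matters is the shape of the reasoning, in which a change aimed at one component of the system cascades through everything connected to it.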

This kind of analysis might lead to the conclusion that a piece of new technology would, after all, be more dangerous to deploy than was first imagined. Or it could lead to us changing aspects of the design of the new technology, so that it would remain beneficial even if these other alterations in the environment took place.

Anticipate fat tails

The principle of “Anticipate fat tails” urges us to remember that not every statistical distribution follows the famous Normal curve, also known as the Gaussian bell curve.

For Normal distributions, once we observe the mean of a set of observations, often denoted by the Greek letter mu (μ), and also the standard deviation of these observations, known as sigma (σ), we can be confident that new measurements more than three standard deviations away from the mean will be unlikely. They’ll be seen only around three times in a thousand. And for a new measurement that is more than six standard deviations away from the mean, you would have to wait on average more than one million years, if a new measurement was made every single day.
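
These figures can be checked directly from the tail probabilities of the Normal distribution. Here is a minimal sketch, using the scipy library; the one-measurement-per-day framing is simply the one used above:

```python
from scipy.stats import norm

# Two-sided probability of a new measurement landing more than 3 sigma from the mean
p3 = 2 * norm.sf(3)
print(f"More than 3 sigma from the mean: {p3:.4f}")   # ~0.0027, i.e. around 3 in 1000

# Two-sided probability of landing more than 6 sigma from the mean
p6 = 2 * norm.sf(6)
print(f"More than 6 sigma from the mean: {p6:.1e}")   # ~2e-09

# Expected wait for such a measurement, at one measurement per day
print(f"Expected wait: about {1 / p6 / 365.25:,.0f} years")   # comfortably over a million years
```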

However, our initial observations of the data might lead us astray. The preconditions for the distribution of results being Normal might not apply. These preconditions require that the outcomes are formed from a large number of individual influences which are independent of each other. When, instead, there are connections between these individual influences, the distribution can change to have what are known as “fat tails”. In such cases, outcomes that are at least six sigma away from the previously observed mean – or even twenty sigma away from it – can arise more often, taking everyone by a horrible surprise.

That possibility would change the analysis from “how might we cope with significant harm”, such as a result three sigma away from the mean, to “could we cope with total ruin”, such as a result that is, say, twenty sigma distant.
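
To see how sharply fat tails change that calculus, compare the Normal distribution with a Student-t distribution with 3 degrees of freedom – used here purely as an illustrative stand-in for a fat-tailed process, not as a model of any particular system:

```python
from scipy.stats import norm, t

# Compare raw tail probabilities (for simplicity, without rescaling the t
# distribution by its own, larger, standard deviation)
for k in (3, 6, 20):
    p_normal = 2 * norm.sf(k)   # thin-tailed benchmark
    p_fat = 2 * t.sf(k, 3)      # fat-tailed stand-in: Student-t, 3 degrees of freedom
    print(f"{k:>2} sigma: Normal {p_normal:.1e}   fat-tailed {p_fat:.1e}")
```

Under the Normal curve, a twenty-sigma move is to all intents and purposes impossible; under the fat-tailed alternative it is merely rare – exactly the kind of difference that separates “significant harm” from “total ruin”.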

In practical terms, this means our plans for the future should beware the creation of monocultures that lack sufficient diversity – cultures in which all the variations can move in the same direction at once.

We should also beware the influence of hidden connections, such as the shadow links between multiple different financial institutions that precipitated the shock financial collapse in 2008.

For example, consider a cry of exasperation in August 2007 from David Viniar, the Chief Financial Officer for Goldman Sachs. Viniar was offering his explanation for a dismal reversal of fortune in the bank’s Global Alpha investment fund. This was no ordinary fund: it used what were described as “sophisticated computer models” to identify very small differences in market prices, and to buy or sell securities as a result. The fund had delivered stellar financial results for a number of years, before experiencing a major setback as the global financial crisis gathered pace. Viniar’s shocked comment: “We were seeing things that were 25-standard deviation moves, several days in a row”.

Viniar was by no means alone, as a banking executive, in being caught out by the scale of deviations which occurred in the prices of key financial instruments in 2007. John Taylor of Stanford and John Williams of the Federal Reserve Bank of San Francisco have calculated some stunning “before and after” statistics for the so-called “spread” between the overnight interbank lending rate and the London interbank offered rate (Libor). The baseline statistics covered the period from December 2001 to July 2007, that is, the period before the financial crisis. However, the spread on 9th August 2007 exceeded the baseline mean by seven baseline standard deviations. By 20th March 2008, the spread exceeded that mean by sixteen standard deviations.
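
The arithmetic behind statements like “sixteen standard deviations” is straightforward: compute the mean and standard deviation over the calm baseline period, then express the stressed observation in units of that baseline standard deviation. The sketch below shows the calculation with invented placeholder values, not the actual Taylor-Williams data:

```python
import statistics

def sigma_deviation(baseline, observed):
    """How many baseline standard deviations separate 'observed' from the baseline mean."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    return (observed - mu) / sigma

# Illustrative spread values in basis points (invented, NOT the real Libor data)
baseline_spread = [10, 11, 9, 12, 10, 11, 10, 9, 12, 11]   # calm period
stressed_spread = 28                                        # a single stressed day

print(f"{sigma_deviation(baseline_spread, stressed_spread):.1f} sigma above the baseline mean")
```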

The takeaway: the mere fact that performance trends seem to be well behaved for a number of years provides no guarantee against sharp ruinous turns of fortune.

Indeed, whenever there are reasons to foresee fat tail outcomes, it means we need to rethink our plans for the new technology. Otherwise, the world might experience a shock outcome from which there is no prospect of any recovery – perhaps for generations, perhaps indefinitely.

Next, let’s review the principles covering the characteristics that are highly desired in technological solutions.
