Responsible development

Ensuring development takes place responsibly

Five of the Singularity Principles cover methods to ensure that development takes place responsibly:

  • Insist on accountability
  • Penalise disinformation
  • Design for cooperation
  • Analyse via simulations
  • Maintain human oversight

Insist on accountability

The principle of “Insist on accountability” aims to deter developers from knowingly or recklessly cutting key corners in the way they construct and utilise technology solutions.

A lack of accountability often shows up in one-sided licence terms that accompany software or other technology. These terms avoid any acceptance of responsibility when errors occur and damage arises. If something goes wrong with the technology, these developers effectively shrug their shoulders regarding the mishap. That kind of avoidance needs to stop.

Instead, legal measures should be put in place that incentivise paying attention to, and adopting, methods that are most likely to result in safe, reliable, effective technological solutions.

As always with legal incentives, the effectiveness of these measures will require:

  • Regular reviews to check that no workarounds are being used that allow developers to conform to the letter of the law whilst violating its spirit
  • High-calibre people who are well-informed and up-to-date, working on the definition and monitoring of these incentives
  • Society providing support to people in these roles of oversight and enforcement, by paying appropriate salaries, providing sufficient training, and protecting legal agents against vindictive counter-suits.

Penalise disinformation

As a special case of insisting on accountability, the principle of “Penalise disinformation” insists that penalties should be applied when people knowingly or recklessly spread wrong information about technological solutions.

Communications that distort or misrepresent features of a product or method should result in sanctions, proportionate to the degree of damage that could ensue.

An example would be if a company notices problems with its products, as a result of an audit, but fails to disclose this information, and instead insists that there is no issue that needs further investigation.

Again, this will require high-calibre people who are well-informed and up-to-date, working on the definition and monitoring of what counts as disinformation. The payment and training of such people is likely to need to be covered from public funds.

Design for cooperation

Another initiative that is likely to need public coordination, rather than arising spontaneously from marketplace interaction, is a strong preference for collaboration on matters of safety. That’s in contrast to a headlong competitive rush to release products as quickly as possible, in which short-cuts are taken on quality.

Hence the principle of “Design for cooperation”.

For example, public policy could give preferential terms to solutions that share algorithms as open source, without any restriction on other companies using the same algorithms. Relatedly, a careful reconsideration of the costs and benefits of intellectual property rules is overdue.

Public funding and resources should also be provided to support the definition and evolution of open standards, enabling the spirit of “collaborate before competing”.

To be clear, the definition and timely evolution of open standards is a hard task. It will (once again) require high-calibre people, who are well-informed and up-to-date, working on the definition and evolution of these standards.

In turn, this is likely to require public subsidy, to ensure that it happens in an effective manner that can win the respect and trust of the companies whose solutions will be impacted.

Analyse via simulations

One factor that has always helped in the design and production of new technology is previous technology, including tools and components.

This includes test environments, in which new technology can be put under stress in a variety of circumstances, before being released for wider deployment.

Designing and using test environments in an efficient, effective way is a major engineering discipline in its own right. There’s little point in repeating the same test again and again with little variation. That would consume resources and delay product release with little additional benefit. Testing is, therefore, a creative activity. On the other hand, the more that test processes can be automated, the easier it can be to ensure they are completed in a comprehensive manner.
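The trade-off just described, avoiding near-identical repeats while automating broad coverage, can be sketched in code. This is purely illustrative: `run_stress_test` and its acceptance rule are hypothetical stand-ins for a real system under test, not anything from the text above.

```python
# Illustrative sketch only: sweep a grid of genuinely distinct test
# conditions, rather than re-running one scenario with little variation.
import itertools

def run_stress_test(load, latency_ms, failure_rate):
    """Hypothetical stand-in for a real system-under-test check.

    Returns True if the system copes under the given conditions.
    The acceptance rule below is invented for this sketch.
    """
    return load * failure_rate < 50 and latency_ms < 500

# Distinct values along each axis of stress, combined automatically.
loads = [10, 100, 1000]
latencies = [50, 200, 450]
failure_rates = [0.01, 0.05, 0.2]

results = {
    params: run_stress_test(*params)
    for params in itertools.product(loads, latencies, failure_rates)
}

failures = [params for params, ok in results.items() if not ok]
print(f"{len(results)} scenarios tested, {len(failures)} failures")
# → 27 scenarios tested, 6 failures
```

Because the grid is generated rather than hand-written, adding one more value to any axis automatically multiplies the coverage, which is the sense in which automation makes comprehensive testing easier.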

One new factor in recent times is the ability for technology to be tested in virtual environments, that is, in simulations. The principle of “Analyse via simulations” urges that attention be given to simulations in which products and methods can be analysed in advance of real-world deployment, with a view to uncovering potential surprise developments that may arise in stress conditions.

Inevitably, each simulation environment is likely to have its own limitations and drawbacks. They won’t fully anticipate all the eventualities that may occur in real world situations. However, over time, these simulations can improve, becoming more and more useful, and more and more reliable.
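As a purely illustrative sketch of the idea, the following toy Monte Carlo simulation probes a system under randomised stress before any real-world deployment. The `simulated_response` model and its numbers are invented for this example; real simulation environments would be far richer.

```python
# Illustrative sketch only: a toy Monte Carlo simulation that counts
# "surprise" failures under randomised stress conditions.
import random

def simulated_response(demand, capacity=100):
    """Hypothetical model of a system under load.

    Gaussian jitter stands in for unmodelled real-world noise;
    returns True if the system copes with the given demand.
    """
    jitter = random.gauss(0, 5)
    return demand + jitter <= capacity

random.seed(42)  # reproducible runs make surprises easier to investigate
trials = 10_000
surprises = sum(
    not simulated_response(demand=random.uniform(0, 120))
    for _ in range(trials)
)
print(f"Failure rate under stress: {surprises / trials:.1%}")
```

Note the limitation flagged in the text: the simulation is only as good as its model, so the jitter term here is a reminder that unanticipated real-world factors must themselves be modelled, however imperfectly.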

Creating and maintaining best-in-class simulations is likely to require (once again) the support of public funding and resources.

Maintain human oversight

Discussion of the role of simulated environments brings us to the final principle in this section – the principle of “Maintain human oversight”.

Automated systems driven by artificial intelligence are playing an increasing role in the development of new technology. These systems assist with the specification, design, implementation, verification, testing, monitoring, and analysis of new technology. They can make the overall process faster and more reliable.

However, although recommendations for next steps in developing products and methods will increasingly originate from software or AI, control needs to remain in human hands. We must ensure that such proposals arising from automated systems are reviewed and approved by an appropriate team of human overseers.

That’s because our AI systems are, for the time being, inevitably limited in their general understanding.

It’s also the case that any one human is limited in their general understanding. That’s why the principles of “Require peer reviews” and “Involve multiple perspectives”, covered earlier, come into play. Just as we should avoid putting too much trust into any one AI system, we should avoid putting too much trust into any one human reviewer.

To extend this point: Rather than relying on the analysis of a single AI review system, we should look for ways to have multiple different independent AIs review the recommendations for product development. But in all cases, the final decisions in any contentious or serious matter should pass through human oversight.
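One minimal way to sketch this arrangement, with a hypothetical function name and an invented escalation rule, is as follows: independent automated verdicts are combined, and any disagreement, or any serious matter, is routed to human overseers rather than decided automatically.

```python
# Illustrative sketch only: combining independent automated reviews,
# with contentious or serious cases escalated to human oversight.

def combine_reviews(verdicts, serious=False):
    """Approve only when independent reviewers agree and stakes are low.

    verdicts: list of booleans from separate automated review systems.
    Returns 'approve', 'reject', or 'escalate to humans'.
    """
    if serious or (True in verdicts and False in verdicts):
        return "escalate to humans"  # disagreement, or high stakes
    return "approve" if all(verdicts) else "reject"

# Unanimous approval on a routine change passes automatically...
print(combine_reviews([True, True, True]))   # → approve
# ...but any disagreement goes to people...
print(combine_reviews([True, False, True]))  # → escalate to humans
# ...as does any serious matter, even with unanimous machine approval.
print(combine_reviews([True, True, True], serious=True))  # → escalate to humans
```

The design choice to escalate unanimous verdicts on serious matters reflects the point above: no amount of agreement among AI reviewers substitutes for human oversight when the stakes are high.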

This also means that we humans need regularly to practise making independent decisions, without becoming overly dependent on AI tools that might, in unexpected circumstances, mislead us or let us down. Again, simulated virtual environments can provide useful practice situations. A group of humans can take roles in a collaborative “game play” that features the zigs and zags of technology development and deployment in a simulation of competitive, fast-changing circumstances. At the conclusion of the exercise, the participants should conduct a retrospective:

  • What did they learn?
  • What would they do differently on another occasion?
  • What are the limits – and the strengths – of the simulated exercise?

Finally, in this in-depth review of the Singularity Principles, let’s move on to the principles covering the evolution and enforcement of the principles themselves.

