Ensuring development takes place responsibly
Five of the Singularity Principles cover methods to ensure that development takes place responsibly:
- Insist on accountability
- Penalise disinformation
- Design for cooperation
- Analyse via simulations
- Maintain human oversight
Insist on accountability
The principle of “Insist on accountability” aims to deter developers from knowingly or recklessly cutting key corners in the way they construct and utilise technology solutions.
A lack of accountability often shows up in one-sided licence terms that accompany software or other technology. These terms avoid any acceptance of responsibility when errors occur and damage arises. If something goes wrong with the technology, these developers effectively shrug their shoulders regarding the mishap. That kind of avoidance needs to stop.
Instead, legal measures should be put in place that incentivise paying attention to, and adopting, methods that are most likely to result in safe, reliable, effective technological solutions.
As always with legal incentives, the effectiveness of these measures will require:
- Regular reviews to check that no workarounds are in use that allow developers to conform to the letter of the law whilst violating its spirit
- High-calibre people who are well-informed and up-to-date, working on the definition and monitoring of these incentives
- Society providing support to people in these roles of oversight and enforcement, by paying appropriate salaries, providing sufficient training, and protecting legal agents against vindictive counter-suits.
Penalise disinformation
As a special case of insisting on accountability, the principle of “Penalise disinformation” insists that penalties should be applied when people knowingly or recklessly spread false information about technological solutions.
Communications that distort or misrepresent features of a product or method should result in sanctions, proportionate to the degree of damage that could ensue.
An example would be if a company notices problems with its products, as a result of an audit, but fails to disclose this information, and instead insists that there is no issue that needs further investigation.
Again, this will require high-calibre people who are well-informed and up-to-date, working on the definition and monitoring of what counts as disinformation. The payment and training of such people is likely to need to be covered from public funds.
Design for cooperation
Another initiative that is likely to need public coordination, rather than arising spontaneously from marketplace interaction, is a strong preference for collaboration on matters of safety. That’s in contrast to a headlong competitive rush to release products as quickly as possible, in which short-cuts are taken on quality.
Hence the principle of “Design for cooperation”.
For example, public policy could give preferential terms to solutions that share their algorithms as open source, without restriction on other companies using the same algorithms. Relatedly, a careful reconsideration of the costs and benefits of intellectual property rules is overdue.
Public funding and resources should also be provided to support the definition and evolution of open standards, enabling the spirit of “collaborate before competing”.
To be clear, the definition and timely evolution of open standards is a hard task. It will (once again) require high-calibre people, who are well-informed and up-to-date, working on the definition and evolution of these standards.
In turn, this is likely to require public subsidy, to ensure that it happens in an effective manner that can win the respect and trust of the companies whose solutions will be impacted.
Analyse via simulations
One factor that has always helped to design and produce new technology is previous technology, including tools and components.
This includes test environments, in which new technology can be put under stress in a variety of circumstances, before being released for wider deployment.
Designing and using test environments in an efficient, effective way is a major engineering discipline in its own right. There’s little point in repeating the same test again and again with little variation. That would consume resources and delay product release with little additional benefit. Testing is, therefore, a creative activity. On the other hand, the more that test processes can be automated, the easier it can be to ensure they are completed in a comprehensive manner.
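The contrast drawn above – between repeating one fixed test and automating a comprehensive sweep of varied conditions – can be sketched in code. The following Python fragment is purely illustrative: the product model (a braking-distance formula), the parameter ranges, and the invariants checked are all hypothetical stand-ins, not taken from any real test suite.

```python
import random


def brake_distance(speed_kmh: float, surface_friction: float) -> float:
    """Hypothetical model under test: stopping distance in metres."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return (v * v) / (2 * 9.81 * surface_friction)


def run_varied_tests(trials: int = 1000, seed: int = 42) -> list:
    """Automated testing with deliberate variation: rather than repeating
    one fixed case, sample many parameter combinations and check
    invariants that must hold in every case."""
    rng = random.Random(seed)  # seeded, so failures are reproducible
    failures = []
    for _ in range(trials):
        speed = rng.uniform(1.0, 200.0)
        friction = rng.uniform(0.1, 1.0)
        d = brake_distance(speed, friction)
        # Invariant 1: stopping distance must be positive.
        if d <= 0:
            failures.append(f"non-positive distance at speed={speed:.1f}")
        # Invariant 2: more speed on the same surface must never
        # shorten the stopping distance.
        if brake_distance(speed + 10, friction) <= d:
            failures.append(f"distance not monotonic at speed={speed:.1f}")
    return failures
```

Each run exercises a thousand distinct scenarios for the cost of one, which is the sense in which automation makes comprehensive testing easier to achieve.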
One new factor in recent times is the ability for technology to be tested in virtual environments – that is, in simulations. The principle of “Analyse via simulations” urges that attention be given to simulations in which products and methods can be analysed in advance of real-world deployment, with a view to uncovering potential surprise developments that may arise in stress conditions.
Inevitably, each simulation environment is likely to have its own limitations and drawbacks. No simulation will fully anticipate all the eventualities that may occur in real-world situations. However, over time, these simulations can improve, becoming more and more useful, and more and more reliable.
Creating and maintaining best-in-class simulations is likely to require (once again) the support of public funding and resources.
Maintain human oversight
Discussion of the role of simulated environments brings us to the final principle in this section – the principle of “Maintain human oversight”.
An increasing role in the development of new technology is being played by automated systems operated via artificial intelligence. These systems assist with the specification, design, implementation, verification, testing, monitoring, and analysis of new technology. They can make the overall process faster and more reliable.
However, although recommendations for next steps in developing products and methods will increasingly originate from software or AI, control needs to remain in human hands. We must ensure that such proposals arising from automated systems are reviewed and approved by an appropriate team of human overseers.
That’s because our AI systems are, for the time being, inevitably limited in their general understanding.
It’s also the case that any one human is limited in their general understanding. That’s why the principles of “Require peer reviews” and “Involve multiple perspectives”, covered earlier, come into play. Just as we should avoid putting too much trust into any one AI system, we should avoid putting too much trust into any one human reviewer.
To extend this point: Rather than relying on the analysis of a single AI review system, we should look for ways to have multiple different independent AIs review the recommendations for product development. But in all cases, the final decisions in any contentious or serious matter should pass through human oversight.
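The decision flow described here – several independent AI reviewers, with contentious or serious matters always passing to humans – can be sketched as follows. This Python fragment is a hypothetical illustration: the `Proposal` structure, the severity labels, and the two stand-in reviewers are assumptions, not a description of any real system.

```python
from dataclasses import dataclass


@dataclass
class Proposal:
    description: str
    severity: str  # "routine" or "serious" (hypothetical labels)


def review_pipeline(proposal, reviewers):
    """Collect verdicts ("approve"/"reject") from several independent
    reviewers; escalate to human oversight whenever the matter is
    serious, or whenever the reviewers disagree."""
    verdicts = {reviewer(proposal) for reviewer in reviewers}
    if proposal.severity == "serious" or len(verdicts) > 1:
        return "escalate-to-human"
    return verdicts.pop()  # unanimous verdict on a routine matter


# Two stand-in "AI reviewers" with different dispositions:
def cautious(p):
    return "reject" if "untested" in p.description else "approve"


def permissive(p):
    return "approve"
```

The design choice worth noting is that human review is triggered by either condition independently: unanimity among the automated reviewers is never sufficient, on its own, to wave through a serious matter.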
This also means that we humans need to regularly practise making independent decisions, without becoming overly dependent on AI tools that might, in unexpected circumstances, mislead us or let us down. Again, simulated virtual environments can provide useful practice situations. A group of humans can take roles in a collaborative “game play” that features the zigs and zags of technology development and deployment in a simulation of competitive, fast-changing circumstances. At the conclusion of the exercise, the participants should conduct a retrospective:
- What did they learn?
- What would they do differently on another occasion?
- What are the limits – and the strengths – of the simulated exercise?
Finally, in this in-depth review of the Singularity Principles, let’s move on to the principles covering the evolution and enforcement of the principles themselves.