Ensuring development takes place responsibly
Five separate principles concern methods to ensure that development takes place responsibly:
- Insist on accountability
- Penalise disinformation
- Design for cooperation
- Analyse via simulations
- Maintain human oversight
Insist on accountability
The principle of “insist on accountability” aims to prevent developers from knowingly or recklessly cutting key corners in the way they construct technology solutions.
A lack of accountability often shows up in the one-sided licence terms that accompany software and other technology. These terms disclaim any responsibility when errors occur and damage results. If something goes wrong with the technology, these developers effectively shrug their shoulders regarding the mishap. That kind of avoidance needs to stop.
Instead, legal measures should be put in place that incentivise paying attention to, and adopting, methods that are most likely to result in safe, reliable, effective technological solutions.
As always with legal incentives, this will require high-calibre people who are well-informed and up-to-date, working on the definition and monitoring of these incentives.
Penalise disinformation
As a special case of insisting on accountability, the principle of “penalise disinformation” requires that penalties be applied when people knowingly or recklessly spread false information about technological solutions.
Communications that distort or misrepresent features of a product or method should result in sanctions, proportionate to the degree of damage that could ensue.
An example would be a company that notices problems with its products as a result of an audit, but fails to disclose this information and instead insists that there is no issue needing further investigation.
Again, this will require high-calibre people who are well-informed and up-to-date, working on the definition and monitoring of what counts as disinformation. The payment and training of such people will likely need to be covered by public funds.
Design for cooperation
Another initiative that is likely to need public coordination, rather than arising spontaneously from marketplace interaction, is a strong preference for collaboration on matters of safety. The alternative is a headlong competitive rush to release products as quickly as possible, in which short-cuts are taken on quality.
Hence the principle of “design for cooperation”.
For example, public policy could give preferential terms to solutions that share algorithms as open source, without any restriction on other companies using the same algorithms.
Public funding and resources should also be provided to support the definition and evolution of open standards, enabling the spirit of “collaborate before competing”.
Frankly, the definition and timely evolution of open standards is a hard task. It will require high-calibre people, who are well-informed and up-to-date, working on the definition and evolution of these standards.
In turn, this is likely to require public subsidy, to ensure that it happens in an effective manner that can win the respect and trust of the companies whose solutions will be impacted.
Analyse via simulations
One factor that has always helped to design and produce new technology is previous technology, including tools and components.
This includes test environments, in which new technology can be put under stress in a variety of circumstances, before being released for wider deployment.
Designing and using test environments in an efficient, effective way is a major engineering discipline in its own right. There’s little point in repeating the same test again and again with little variation. That would consume resources and delay product release with little additional benefit. Testing is, therefore, a creative activity. On the other hand, the more that test processes can be automated, the easier it can be to ensure they are completed in a comprehensive manner.
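As a rough illustration of the point about automation and variation (all names and thresholds here are hypothetical, not drawn from any particular product), the sketch below runs a toy component against many distinct randomly varied inputs rather than repeating one identical test, with a fixed seed so that any failure can be replayed exactly:

```python
import random

def component_under_test(load: float) -> float:
    """Hypothetical component: returns a response time that degrades under load."""
    return 0.1 + 0.05 * load

def run_varied_stress_tests(trials: int = 100, seed: int = 42) -> list:
    """Exercise the component under many distinct load levels.

    Varying the input on each trial explores more of the behaviour space
    than repeating the same case; the fixed seed keeps the whole run
    reproducible, so a failing trial can be replayed for diagnosis.
    """
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        load = rng.uniform(0.0, 10.0)  # vary the stress condition each time
        response = component_under_test(load)
        if response > 1.0:  # hypothetical acceptance threshold
            failures.append((load, response))
    return failures
```

Property-based testing libraries take this idea much further, but even this minimal loop shows how automation makes comprehensive, varied testing cheap to run on every build.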
One new factor in recent times is the ability for technology to be tested in virtual environments – that is, in simulations. The principle of “analyse via simulations” urges that attention be given to virtual reality simulations in which products and methods can be analysed in advance of real-world deployment, with a view to uncovering potential surprise developments that may arise in stress conditions.
To be clear, each virtual reality simulation is likely to have its own limitations and drawbacks. They won’t fully anticipate all the eventualities that may occur in real-world situations. However, over time, these simulations can improve, becoming steadily more useful and more reliable.
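A minimal sketch of the idea, using a deliberately toy model (all parameters invented for illustration): simulate many days of bursty demand against a fixed system capacity, and measure how often the system would be overwhelmed – the kind of “surprise under stress” that is far cheaper to discover in simulation than after deployment:

```python
import random

def simulate_day(capacity: int, rng: random.Random) -> bool:
    """Simulate one day's demand against a fixed capacity.

    Demand follows a toy bursty pattern: a baseline each hour, plus an
    occasional burst. Returns True if demand ever exceeds capacity.
    """
    for _hour in range(24):
        base = rng.randint(0, capacity)
        burst = rng.randint(0, capacity) if rng.random() < 0.05 else 0
        if base + burst > capacity:
            return True
    return False

def estimate_overload_rate(capacity: int = 100, days: int = 1000, seed: int = 7) -> float:
    """Estimate the fraction of simulated days on which the system is overwhelmed."""
    rng = random.Random(seed)
    overloads = sum(simulate_day(capacity, rng) for _ in range(days))
    return overloads / days
```

Real simulations are vastly richer than this, but the shape is the same: run the system through many varied virtual scenarios, and let the statistics reveal failure modes before any real-world stakes are involved.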
Creating and maintaining best-in-class virtual reality simulations is likely to require the support of public funding and resources.
Maintain human oversight
Discussion of the role of simulated virtual environments brings us to the last principle in this section – the principle of “maintain human oversight”.
An increasing role in the development of new technology is being played by automated systems operated via artificial intelligence. These systems assist with the specification, design, implementation, verification, testing, monitoring, and analysis of new technology. They can make the overall process faster and more reliable.
However, although recommendations for next steps in developing products and methods will increasingly originate from software or AI, control needs to remain in human hands. We must ensure that such proposals arising from automated systems are reviewed and approved by an appropriate team of human overseers.
That’s because our AI systems are, for the time being, inevitably limited in their general understanding.
It’s also the case that any one human is limited in their general understanding. That’s why the principles of “Require peer reviews” and “Involve multiple perspectives”, covered earlier, come into play.
It’s the same with AI. We should look for ways to have multiple different independent AIs review the recommendations for product development. And in all cases, the final decisions in any contentious or serious matter should pass through human oversight.
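One way to picture this policy as a concrete rule (a sketch with invented names, not a description of any actual system): gather verdicts from several independent reviewers, and escalate to human sign-off whenever the reviewers disagree or the matter is flagged as contentious or serious:

```python
from typing import Callable, List

def needs_human_decision(verdicts: List[str], serious: bool) -> bool:
    """Return True when a proposal must go to human overseers.

    Escalate if the independent reviewers disagree with each other, or
    if the matter is flagged as serious regardless of AI consensus.
    """
    return serious or len(set(verdicts)) > 1

def review_proposal(proposal: str,
                    reviewers: List[Callable[[str], str]],
                    serious: bool) -> str:
    """Collect verdicts from multiple independent reviewers (e.g. separate AIs)."""
    verdicts = [review(proposal) for review in reviewers]
    if needs_human_decision(verdicts, serious):
        return "escalate-to-human"
    return verdicts[0]  # unanimous verdict on a routine matter
```

The design choice here mirrors the text: automated agreement is allowed to settle only routine matters, while disagreement or seriousness always routes the final decision to humans.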
This also means that we humans need to regularly practise making independent decisions, without becoming overly dependent on AI tools that might, in unexpected circumstances, mislead us or let us down.