Project for a Progressive Ethics

By Dil Green

A proposal for progress

When engaging in events and conversations around the themes of Artificial Intelligence, Trans/Post-humanism, Singularity scenarios and Digital Futurism, we encounter all sorts of questions which involve consideration of unknowns, suppositions, assertions and opinions. Despite these layers of unknowns, it is nevertheless clear that society will soon need to make some serious decisions on a wide variety of issues. The outcome of these decisions is likely to have significant recursive impact on the very nature of humanity.


In discussing these questions, one thought arises repeatedly: the single most important tool we need in making these decisions is a robust ethical framework – namely, a framework which is widely shared and which is ‘fit for purpose’ in addressing change and uncertainty.

This is not an original insight – it seems to be commonplace. Eliezer Yudkowsky has been informally quoted as having said that,

Humanity will most probably be saved not by technologists but by philosophers.

However, what this ethical framework might actually be is typically assumed to be the responsibility of others, in some unspecified future.

Given that many commentators in varied fields subscribe to the idea that we are in a period of exponential change, one or more of these epochal phenomena will likely impinge on us in the next few decades, and so the development of a useful ethical framework would seem to be an urgent undertaking. It is surely incumbent on individuals and groups who have reached this conclusion not simply to ‘kick the can down the road’, but to act.

The time to start work is now.

A Progressive Ethics?

Of course, there already exist many and varied statements on ethics: the work of great philosophers, international declarations, legal frameworks, proposals in profusion. Why would we want yet another?

For a start, most are framed as static documents, closed to implications of rapid change; implicitly or explicitly, most have been developed in reaction to historical conditions, rather than with an eye to the future, and are set within frames of reference of a particular philosopher, tradition or class consciousness.

Clearly, existing frameworks will be important reference material, embodying as they do the best-intentioned thoughts of humanity over history. These, along with work by groups like the IEET and others within the futurist / progressive community, and the established practice of ethical committees within scientific, academic and medical establishments, must all be given serious consideration. However, none of these sources alone seems immediately suitable for our purpose as it stands.

This proposal purposefully avoids any suggestion as to the content of a Progressive Ethics. Instead, the aim here is to start the ball rolling: to make some suggestions for a process and structure to support such a project, designed so that it can be truly progressive, robust, practically useful and widely accepted.

What do we need?

The proposal is that a Progressive Ethics is developed which can be of use to humanity in navigating the wide range of novel possibilities which must now be admitted as having the potential for significant impact on real futures (possibilities previously confined to the pages of speculative fiction).

Such a framework should help us to have better conversations – minimising the traps of misunderstanding and misrepresentation and enabling debate at ever higher levels based on clear shared understandings – even if these are understandings of disagreement.

We want this framework to be of practical use in deciding and acting on questions such as:

  • The development of reliably ‘friendly’ AI.
  • The social management of a wide variety of technically possible modifications to strict biological life.
  • The implications of augmented humanity / transhumanism.
  • Effective and responsive approaches to inherently complex subjects such as human impact on the biosphere.

Suggestions for a start

These ideas are intended to start a debate about how such a project might begin, how it might be structured, how it might frame itself, and how it might best ensure that it remains relevant and responsive.

I suggest that we:

  • Frame the effort as the initiation of a process – a process that will continue to respond to new developments in knowledge, technology and culture. This must include the guaranteed provision (and expectation) that ‘forks’ of the project are permitted;
  • Set the fundamental aims of the project from the outset, and look to enshrining these in the foundational constitution of the body charged with maintaining and supporting the project;
  • Look for a structure for representing / communicating the framework which:
    • is not overly reductive, but remains rigorously rational,
    • strikes the most effective balance between clarity and simplicity on the one hand, and appropriate flexibility of application on the other,
    • supports the process-based approach without introducing undue ambiguity,
  • Design the process from the outset to be one which enables broad engagement without loss of focus – this will mean selecting appropriate democratic structures for the core body alongside processes for concentric levels of engagement with wider audiences.

All of these suggestions need elaboration, but the key aim of this post is to generate interest from people willing to take the fundamental idea of such a project forwards.

Get involved HERE (Transpolitica) or HERE (H+Pedia).

About the author

Dil Green trained and worked as an architect. Notable projects include the Wellcome Wing at the Science Museum and a pioneering eco-friendly GP surgery.

The heroic self-image of architecture as the profession that actually builds a better future appealed to him, as a pragmatic utopian – someone who believes in working today towards a better tomorrow. However, the strong limitations of the discipline quickly became apparent to him.

Since the advent of the web and smart devices, it has become increasingly clear that, for good or ill, the future will be built on the basis of digital tools. More, the kind of future that will be built is critically dependent on which particular tools become dominant.

His energies now go towards building digital tools and the social understanding around them that lead to the most positive outcomes for humanity that he can discern. He is interested in grass-roots, bottom-up developments, ones which can side-step power structures, ones which diminish the need for ‘approval’ from above, ones which empower humans acting in small groups towards human ends.

14 thoughts on “Project for a Progressive Ethics”

  1. Thank you very much for drawing attention to this crucially important topic! It’s actually astonishing how little effort has been put into developing future- and transhuman-compatible ethics, rather than settling for pursuing the implementation of contemporary progressive ethical imperatives, like human or animal rights (which are of course of immense importance, too). There are certainly parallels to politics: Developing new political systems or just new political initiatives is much more difficult than pursuing already existing political agendas.

    First, I want to point out some potential problems with this kind of project.

    1. There seems to be a serious fault line between anthropocentric and non-anthropocentric ethics. While humanists argue from the point of utility for humans, posthumanists include the interests of entities other than humans in their considerations. Transhumanists can lie anywhere on that spectrum. The conflict between both philosophies is exemplified in the discussion about “friendly” AI. AI has to serve the purposes of humans unconditionally, it seems. But what if it’s a sentient AI and the interests of humans are problematic? Shouldn’t we grant those kinds of AIs rights and accept them as relatively equal partners in the quest for the highest values and goals? While many fear the “unfriendly” AI, the more immediate danger comes from “unfriendly” humans. Discussing such issues may be unpopular, but it is absolutely essential. We can’t allow a fixation on anthropocentrism, merely on the basis that it’s a popular position and we happen to be humans.

    2. Ethics is not an easy subject. At the same time, it’s hard to tell what would qualify a person to make valuable contributions to a debate on ethics. Intuitive answers may be terribly biased, while academic thought experiments may prove to be too detached from reality. When it’s not clear how to designate someone as “expert in developing ethical systems”, it’s unjustified to rely on discussions among “experts” alone. On the other hand, it would be equally unjustified to grant everyone an equal vote in these matters, because the debate would degenerate into majoritarianism. My conclusion is that only an open debate without rigid structures can serve as a vehicle for actual progress in the realm of ethics.

    3. What if this project actually comes up with a consensus position within its own community, but this consensus gets rejected by the public at large? Would it be the duty of this project to stick to its position, and try to slowly convince the public? I think that would definitely be necessary. As stated above, majoritarianism and giving in to the public is no solution.

    Secondly, there are certainly a lot of valuable resources out there. Among others, I want to mention the IEET, MIRI, and the Singularity 1-on-1 (now rebranded as singularity.fm) interviews hosted by Nicola Danaylov. Also, philosophers such as David Pearce, Nick Bostrom, and Anders Sandberg would probably be very interested in this project for a progressive ethics.

    Finally, there is already a place that could be used as a starting point for further discussion, and that actually has at least some overlap with Transpolitica: it’s the Fractal Future Forum ( https://forum.fractalfuture.net ), an open space for envisioning and creating a better future. Regarding the topic of ethics, there have already been some interesting discussions in the “philosophy” category: https://forum.fractalfuture.net/c/discourse/philosophy

    It’s easier to sustain detailed discussions on a forum like that than on a blog or Facebook.

  2. Thank you! Firstly for the positive reception and secondly for your insightful comments. I’ll try to address them, but the opinions will be my own. I am keen not to prescribe the character of what I hope will emerge, hoping rather to engage a wide community of interest. My optimistic ambition is that, if the process and structure can be well-constructed, such a community should be empowered to produce something which will have been freely developed by that community. In line with the comments on the ‘about’ page of Fractal Future Forum about a fractal society of overlapping communities, I would hope that this process could enable a wide variety of detailed ethical positions to develop, which have all grown from a shared set of foundational ethical positions.

    1/ I agree. I would suggest that it is increasingly necessary to develop an ethics that looks to ‘sentience’ and ‘consciousness’, in addition to ‘life’, as its watchwords, and considers humans and humanity merely as the strongest and best understood examples we have of the first two. A progressive ethics, in my view, will be equally interested in the expansion of our understanding of consciousness to other species (last week New Scientist reported experiments which suggest that bees experience something which seems only to be describable as emotion) as it will be in ensuring that its statements apply usefully to as yet unknown intelligence and consciousness that we may come across – whether by invention or encounter.

    2/ I agree that an open debate without rigid structures is needed to open the subject up, and to garner broad support. However, I would like to take things beyond debate at some point, and begin to produce agreed accounts of ethical frameworks. The structure needed to do such things is obviously a topic for debate, but at the moment, I would suggest that some version of ‘deliberative democracy’ [https://en.wikipedia.org/wiki/Deliberative_democracy] might be appropriate.

    3/ By calling for a ‘Progressive Ethics’, I have (hopefully) already guaranteed that the project will not immediately appeal to a majority of the population. The constituency I am interested in for developing the framework is ‘self-identified progressives’ – which is simultaneously prescriptive and wide-open, the hope being that this will attract a broad enough selection of people who are nevertheless interested in progressive ideas about ethics. The proof of the pudding will be in the eating – if a framework is developed which has utility, and which can be shown to address novel situations and conditions pragmatically, ethically and effectively, then it should begin to gain wider appeal. Existing ethical committees working in medical and scientific fields do not subject their ethics to popular democratic votes (apart from perforce operating under local legal structures) – the people operating those committees look to similar bodies for workable tools. If PfPE is productive, its tools should be among the most widely useful, and gain gradual acceptance that way.

    Resources – of course – in fact the danger will be of drowning in resources – another reason to attract a broad community, and develop useful structures for productive discourse.

    … and I’ve used the term discourse to bring me neatly to your forum – (I like the discourse forum tool – as it happens I’m a member of kabissa, which you link to). How best do you think we could start a conversation there?
    Dil Green

    • I would not be able to participate fully on this project, at least I cannot make such a commitment now. On the other hand, as you know, the subject is close to my heart (literally). So, let me comment briefly on what I have read so far on Transpolitica website and on your own website.

      1. First of all, if the project is to be successful and resonate among as many people as possible, its name is not that inspiring. I understand your intentions, but perhaps it could be communicated in a more obvious way – something like “Human’s ethics”, which would differ from, say, Global, World, or even Humanity’s ethics, sounding more futuristic and also indicating the wholesomeness of the proposed ethics.
      2. I love the structure of the project on your website – simple and very clear.
      3. I could not find the ultimate deliverable (what form it would take), nor the intended audience. In order to hope for any impact of the project, we would need a concrete audience, i.e. at the moment perhaps the International Court of Human Rights or the International Court of Justice (UN) – I would be less keen on that. I am sure there are other or perhaps better bodies for possibly turning it into a kind of Human’s Constitution. Grand words, but what you are proposing can only be described as grand.
      4. Following your Framework Structure (I like the idea behind it), I see its application at two levels. The first one deals with the nature of the project itself. The second one – with the dissemination of the work to be done, the results agreed and then implemented. There will be many actors involved.
      5. Therefore, for a project like this to be credible, it needs some involvement, feedback or co-operation/participation of the people or organisations that have already been involved in defining new, human ethics, like Oxford’s Future of Humanity Institute with Nick Bostrom.
      6. I am sure there are quite a few eminent philosophers, lawyers and some politicians already engaged in this subject. Regarding politicians and lawyers, the name that I would put forward is Lord Charles Falconer, former Lord Chancellor and eminent lawyer. The question I would raise is this: what is so fundamentally new that we would be delivering? Is it the approach of the Framework Structure, and/or its very direct application in uploading it to a Superintelligence?
      7. In this context, I do not agree with Eliezer Yudkowsky regarding what we as humans need to do to prepare for the arrival of Superintelligence, namely that “…we actually need to arrive at a good-enough answer and use it”. It is just too risky. I think this should be a kind of fall-back option rather than the ultimate delivery. We need comprehensive and, as far as possible, universally agreed human ethics that we would pass to the next generation.
      8. Overall, I think it is at least worthwhile trying to launch the project and see what the reaction would be.

      • In response to your points:
        1/ Name: I agree – a snappy name would be good. I’m particularly keen, though, not to put myself in the position of being identified as a ‘founder’ or anything like that, of this project. The current phase is one of working to collect a few people with whom to do some scoping and formulate a plan of campaign. It can acquire a good name once it has some good people!
        2/ Website [http://digan.dilgreenarchitect.co.uk/project%20for%20a%20progressive%20ethics%20(pfpe).html]: Ta! It’s built with a wonderful product (Mac only, unfortunately) called VoodooPad (another terrible name…) – a lightweight personal wiki-building tool. https://plausible.coop/voodoopad/
        3+4/ Deliverables and Intended Audience: Important questions! As I say above, once more people are involved, these questions will be properly considered; however, if I had to make it happen today, it would be delivered in the form of a Pattern Language network of linked topics, with reasoned argument and recommendations. I’m writing more about this approach on the website: http://digan.dilgreenarchitect.co.uk/pattern%20language.html.
        As far as audience goes, I think that the public is the main audience – the project would be maintained online with maximal scope for engagement. I also envisage sub-sets of the framework being refined for use in specific domains (ethics committees).
        5+6/ Other Organisations, Smart People: Of course, there are many organisations and individuals that have thought hard and made significant contributions in this area – there is no intention to compete with these. The purpose of the Project is less to develop original ethical insights or vision – more to bring the work that already exists to a wider group, which will be tasked with integration and formulation of this work into coherent, communicable and applicable form. Again, if I had to do it tomorrow, I would be considering a ‘deliberative democracy’ session with a few hundred interested citizens, with delegates from the sorts of bodies you refer to acting as experts supporting the process.
        https://en.wikipedia.org/wiki/Deliberative_democracy
        7/ I am keeping my personal views at one remove from the project – indeed I believe that the structural approaches I hope to develop will support a variety of ethical frameworks – generally in agreement on fundamentals, but differing in response to different topics. The aim is NOT to develop ‘the right answer’ – but to enable some step-change in the ability of humans to engage in ethical debate and to apply ethical thinking to complex situations and newly-posed questions; in this context, a diversity of possible approaches must surely be considered useful.
        8/ Well, I’m doing what I can! Thanks again for your encouragement.

    • Hi Dil,

      I just wanted to applaud your efforts and also let you know about work I’m involved in that might be of interest for you / colleagues to also join if interested. Plus, I’d be happy to mention your work in our continuing efforts.

      I’m Executive Director of a Program called The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems. Website is here: http://standards.ieee.org/develop/indconn/ec/autonomous_systems.html

      We released the first version of a paper in December called Ethically Aligned Design, which you can access on our site. This is version 1 of a document created by over one hundred AI/Ethics thought leaders from The IEEE Global Initiative that contains over eighty pragmatic Issues and Candidate Recommendations for technologists to utilize in their work today to create a positive future. We are actively looking for feedback you / your colleagues can provide via our submission guidelines process listed on the site.

      And again, if you / colleagues are interested in getting involved, please just let me know. My contact deets are on the site.

      Cheers and warm regards,
      JCH

      • John,
        Thanks so much for the encouragement and the link. It certainly feels as if smart people around the world are doing some useful thinking around this subject.
        This is an exciting moment – thus far in human development, matters of ethics have in general been left to philosophers; the involvement of scientists and engineers, as well as the wider tech community, in looking for ethical frameworks which are coherent enough to be implemented in code is likely to bring new perspectives and perhaps more rigorous framings.
        I will share your message with the meetup group started here in London [https://www.meetup.com/Project-for-a-Progressive-Ethics/], and I will bother you with links to any substantive work we produce.
        Regards, Dil
        PS: some of the links above appear to be broken – here’s the central one again: http://digan.dilgreenarchitect.co.uk/project%20for%20a%20progressive%20ethics%20(pfpe).html

  3. That doesn’t sound like the sort of thing Eliezer Yudkowsky would say, and I suspect that informal quoter was misquoting something more along the lines of “Blindly charging ahead into AI is not going to end up turning out well unless we do some work of the sort traditionally regarded as ‘philosophical’, except we actually need to arrive at a good-enough answer and use it.”

    • Eliezer – Thanks for providing a more representative quote. Are you happy for that new sentence to be quoted under your name?

      Before posting this article, I did search for where you might have said the originally quoted text, and all I could find was the unsourced remark in a comments thread at http://www.meetup.com/London-Futurists/events/233447374/, “as Eliezer Yudkowsky, an AI expert said at an AI conference in 2014, Humanity will most probably be saved not by technologists but by philosophers.”

      • I second David’s thanks, and would be pleased to edit the piece.
        Eliezer, I have been wondering where best to point at this article within LessWrong. Any suggestions?

  4. Pingback: Project for a Progressive Ethics | Digital Anthropology

  5. Further development.
    https://www.meetup.com/Project-for-a-Progressive-Ethics/messages/boards/thread/50769243
    Some significant insights / decisions have informed the development of the Project, which are embodied in this update:

    A/ The understanding that an Ethical Framework cannot be built as a binary logic tree, upwards from some foundational base propositions (as appears to be the case in statements on ethics from the Enlightenment onwards). Our Ethical Framework will be a network of inter-related propositions.
    A.1 – this means that we accept moral relativism
    A.1.a – the framework is a process (continually developed on the basis of social interaction)
    A.1.b – the social process of production will act as a ‘sea-anchor’ – regulating drift and ensuring path-dependency
    A.2 – Ethical dilemmas will initially be ‘pointed at’ some particular Ethical Proposition – selected on the basis of pattern recognition as having basic relevance – but will actually connect to a wide network of related Propositions, all of which may be explored for salience.

    B. The realisation that addressing impending technological shifts can only be achieved on the basis of an Ethical Framework that has wide social engagement and levels of acceptance as, at the least, a useful process for discovery. That we need to build a Social Ethics – one which addresses all of the ‘ordinary’ human questions of ethics. On the basis of some level of success at this, we can hope for effectiveness in looking outwards.
    B.1 – We need some crowd-sourcing process that engages a wide range of groups in ethical debate, capturing the thinking, reasoning and relationships they identify on a mass scale.
    B.2 – We will need to do pattern-recognition type work (using a secondary layer of human groups) to assemble a network graph of Ethical Propositions and their relations which can begin to be useful.
    B.3 – The resulting Ethical Framework should be exposed / made discoverable through a web/app interface as a navigable network graph, which can be used for ethical investigation/discovery/engagement (a minimal code sketch of such a graph follows at the end of this comment).
    B.4 – The relevance and applicability of the Ethical Framework should be continuously assessed – feedback mechanisms are required.

    C. That development of such an Ethical Framework can be seen as some sort of re-constitutive work on a society which has been living with Modernism for centuries, a Modernism which has smashed public morality and religion, and is still smashing up rigid cultural norms – gender, racism, patriarchal attitudes. All valuable and necessary work – but it has a Janus face, in that the process tends to atomise us, wall us away in our own personally constructed existentialisms.
    That the Project cannot propose anything like a Public Morality – something which can be used as a socially applied yardstick in judgement of another.
    That in fact it must be designed from the outset to resist any attempt to use it in such a way.
    That it must propose a new mode of shared social code: one that is non-coercive, voluntarist, liberatory?
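
    To make the navigable network-graph idea in B.2 and B.3 above a little more concrete, here is a minimal sketch in Python. It is purely illustrative – the proposition names, relation types and traversal behaviour are assumptions made for the sake of the sketch, not a specification of the eventual Framework or of its web/app interface.

    ```python
    # Minimal sketch of an Ethical Framework as a navigable network of
    # inter-related Propositions (see A.2, B.2 and B.3 above). All names and
    # structures here are illustrative assumptions, not a specification.
    from dataclasses import dataclass, field


    @dataclass
    class Proposition:
        """A single Ethical Proposition: one node in the framework graph."""
        key: str
        statement: str
        # Relations to other propositions, keyed by relation type
        # (e.g. "supports", "tensions-with", "refines").
        relations: dict = field(default_factory=dict)


    class EthicalFramework:
        """A network of inter-related Propositions, not a binary logic tree."""

        def __init__(self):
            self.propositions = {}

        def add(self, key, statement):
            self.propositions[key] = Proposition(key, statement)

        def relate(self, a, relation, b):
            # Record the relation in both directions, so the graph can be
            # navigated from either proposition.
            self.propositions[a].relations.setdefault(relation, set()).add(b)
            self.propositions[b].relations.setdefault(relation, set()).add(a)

        def explore(self, start, depth=2):
            """From the proposition a dilemma is 'pointed at', walk outwards
            to collect the neighbourhood of related propositions that may be
            explored for salience (A.2)."""
            seen, frontier = {start}, {start}
            for _ in range(depth):
                next_frontier = set()
                for key in frontier:
                    for related in self.propositions[key].relations.values():
                        next_frontier |= related - seen
                seen |= next_frontier
                frontier = next_frontier
            return seen


    if __name__ == "__main__":
        fw = EthicalFramework()
        fw.add("consent", "Informed consent is required for interventions on a person.")
        fw.add("sentience", "Sentient beings have interests that carry moral weight.")
        fw.add("augmentation", "Augmentation should not be imposed, only chosen.")
        fw.relate("augmentation", "refines", "consent")
        fw.relate("augmentation", "tensions-with", "sentience")

        # A dilemma about, say, mandatory augmentation is 'pointed at' the
        # 'augmentation' proposition, then the wider network is explored.
        print(fw.explore("augmentation"))
    ```

    A feedback mechanism of the kind B.4 calls for could be as simple as recording how often each proposition is visited or challenged during such explorations, and feeding that back into the social process of revision.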

Leave a comment