EU: INFLUENTIAL GUIDELINES ON AI ETHICS

The EU’s High-Level Expert Group on Artificial Intelligence (“AI HLEG”) has published its much-anticipated guidelines on the ethical use of AI, entitled ‘Ethics Guidelines for Trustworthy AI’ (the “Guidelines”). The Guidelines articulate a set of non-binding but influential principles for the ethical development and implementation of AI systems in Europe. They are particularly focused on AI systems which impact human beings, either because the systems make decisions which affect humans, or because they replace roles previously performed by humans. They represent a roadmap for future law and policy making in this area, much of which is likely to take place in the fields of privacy, security and data regulation.

Background

The AI HLEG was set up by the European Commission in 2018 and tasked with producing a set of guidelines to support the Commission’s vision of an ethical, secure and cutting-edge AI industry in Europe. An initial draft of the Guidelines was submitted for public consultation and received detailed feedback from a number of stakeholders, including industry bodies with a particular interest in AI systems with a human impact (such as Insurance Europe), private companies invested in the field, and academics.

Key Principles and Requirements

The Guidelines introduce the concept of ‘trustworthy AI’ and identify three core components of any trustworthy system: (i) that it is lawful; (ii) that it is ethical; and (iii) that it is robust. Focusing on points (ii) and (iii), the Guidelines set out seven requirements for a trustworthy AI system:

  1. Human agency and oversight;
  2. Technical robustness and safety;
  3. Privacy and data governance;
  4. Transparency;
  5. Diversity, non-discrimination and fairness;
  6. Societal and environmental well-being; and
  7. Accountability.

It is notable that many of these broad ethical requirements map closely onto existing privacy law obligations, in particular those contained in the GDPR. For example: the right to challenge and review automated decision-making; the right to object to profiling; the right to transparency and to receive detailed information about the use of one’s personal data; the requirement for technical and organisational security measures to guard against harm; and the requirement for accountability on the part of data controllers.

Practical Application

The final section of the Guidelines sets out an ‘assessment list’: a practical checklist designed to concretise the broad requirements listed above into a set of controls which can be used to assess an AI system during its development phase. During a pilot phase, the assessment list will be tested by stakeholders to see how well it works in practice, with feedback due in early 2020. In general, any organisation looking either to develop or to implement new AI systems would be well served to have regard both to the requirements and to the assessment list, as these serve as helpful indicators of the likely alignment of an AI system with future regulation which may emerge from the Guidelines, as well as with current laws, such as the GDPR, which share many of the same themes.

James Clark, Senior Associate, DLA Piper UK LLP