A white paper has been leaked which provides an interesting insight into the European Union’s plans for the future regulation of artificial intelligence (“AI”). It follows on the heels of a spate of activity from regulators, governments and international bodies seeking to formulate a governance strategy in response to the exponential growth in AI development and use. Other notable developments include the publication of the EU High-Level Expert Group’s guidelines for ‘trustworthy AI’ and the OECD’s ethical principles for AI.
The European Commission’s paper pays particular regard to the implications of AI for personal privacy and the fundamental European right of personal data protection. In so doing, it considers how future AI legislation would align with and complement the EU’s data protection laws. From a UK perspective, the paper also raises interesting questions about whether, and to what extent, the UK will look to align with evolving European standards on AI, particularly in the face of potentially competing standards in the United States and China (which the paper cites as the current leaders in AI investment).
The Proposed Regulatory Framework
The shape that regulation might take is not yet settled, and the paper proposes a range of different models. Options include a lighter-touch voluntary labelling system for “trustworthy AI”; regulation limited in scope to public bodies (which would hope to influence the private sector, but not regulate it directly); sectoral or technology-specific regulation; and risk-based regulation targeting AI applications perceived as “higher risk”. That threshold is a tricky one to define; the paper’s answer is to combine high-risk sectors (such as healthcare and law enforcement) with the technology’s inherent potential to affect legal rights or cause material damage. In the context of sectoral regulation, one option – much reported to date, although it remains only an option – envisages a temporary ban on the use of facial recognition technology in public spaces.
The paper also envisages a role-based approach to regulation, which distinguishes between the obligations and liabilities of different players in the AI lifecycle – from those who develop the algorithms which underpin the AI at one end, through to the business that acquires and uses the AI at the other end.
Alongside debate about the shape and scope of regulation, key potential requirements for businesses (many of which are consistent with what we have seen in other papers on AI governance) include:
- Transparency for developers, including disclosing the design parameters of the AI system (there are obvious challenges here in respect of confidentiality and IP).
- Transparency for users, including clarity for consumers about AI processes and potential outcomes (this information would likely complement and sit alongside a core privacy notice).
- Human oversight, including review of AI decisions by a human (noting that human review of automated decision-making is already a feature of the GDPR).
- Minimum quality and diversity requirements for the data used to train AI systems.
- Minimum security requirements to guard against perceived risks, particularly cyber threats.
- Apportionment of liability for harm caused by a product or service using AI (considering the responsibility of the developer separate from that of the user).
- Rights of redress for individuals and companies, including the use of alternative online dispute resolution.
Alignment with the GDPR and Data Protection Law
The European Council recently published a draft position on the GDPR (as part of the formal Article 97 evaluation process) that queried how the GDPR would apply to new technologies, chief among these being AI. The GDPR has (as the Council acknowledges) deliberately been drafted to be “technology neutral” – in other words, the general principles and requirements of the GDPR are supposed to be malleable enough for constant adaptation and application to new use cases, unforeseen at the time the GDPR was conceived. However, there are certainly questions about how precisely to interpret some of those requirements in the face of AI, and about the extent to which the GDPR is the correct legislative instrument for tackling the impact of AI on individuals.
The paper is helpful in advancing our thinking on these questions. It indicates simultaneously that (i) the GDPR will remain incredibly pertinent to AI, because of both the reliance on data at the “development” stage, as well as the potential impact on privacy at the “use” stage; and (ii) there is a need to complement the GDPR (and other relevant legislation) with regulation specific to AI.
The Position of the UK
The potential for impending EU legislation on AI creates interesting questions for the UK, which will formally leave the bloc at the end of this month, and is likely to leave the EU’s regulatory sphere at the end of the year when the transition period comes to an end. Assuming new AI legislation arrives shortly after that point, the UK will need to decide whether it wishes to hew closely to the European model, or forge its own path.
AI is a core part of the government’s modern industrial strategy, and in some respects the UK is well placed to succeed in the sector. The UK has a strong research base, and, since 2015, has had a specialist national AI institute – the Alan Turing Institute. From a governance perspective, the UK’s Centre for Data Ethics and Innovation has been working through the ethical issues associated with AI (including many of those touched upon in the EU’s paper), and the Information Commissioner’s Office (“ICO”) has also shown a close interest in this area, developing (amongst other things) an AI Auditing Framework. For the time being, UK thought leadership closely mirrors that of the EU – unsurprising given that the UK currently applies the EU model on matters such as data protection, product safety and fundamental rights. Over time, however – as in other areas – there is potential for divergence, and it is quite possible that other key trading partners (in particular the US, which currently envisages a lighter-touch approach to federal regulation) will offer competing models.
Whichever path the UK chooses to follow, it now looks more likely that there will be a model framework for AI-specific regulation – designed to fit hand in glove with the GDPR – on the UK’s doorstep.
James Clark, Senior Associate, DLA Piper UK LLP