EUROPE: Artificial Intelligence, what can we learn from the GDPR?

Connected devices that exchange substantial volumes of data come with some obvious data protection concerns. Such concerns increase when dealing with artificial intelligence or other devices/robots that autonomously collect large amounts of information and learn through experience.

Although there are not (yet) specific regulations on data protection and artificial intelligence (AI), certain legal trends can be identified, also taking into account the new European General Data Protection Regulation (GDPR).

Accountability and Data Protection by Design

The GDPR requires data controllers to demonstrate compliance, including obligations to carry out, at an early stage, a data protection impact assessment for each risky process/product and to implement data protection by design and by default.

This implies an obligation for software developers and other parties involved in the creation and management of AI to build appropriate safeguards into the data governance process, including, for instance, data minimization and data portability (which should cover both the data provided knowingly by the data subject and the data generated by their activity).
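As a purely illustrative sketch of what "safeguards built into the data governance process" could look like in practice, the snippet below shows a minimisation filter applied at ingestion and a portability export covering both provided and generated data. All field and function names are hypothetical assumptions, not terms prescribed by the GDPR.

```python
# Hypothetical data-governance hooks for an AI ingestion pipeline.
# Field names and structure are illustrative assumptions only.

ALLOWED_FIELDS = {"user_id", "age_band", "preferences"}  # purpose-bound whitelist

def minimise(record: dict) -> dict:
    """Data minimisation: keep only fields the stated purpose requires."""
    return {k: v for k, v in record.items() if k in ALLOWED_FIELDS}

def export_portable(provided: dict, observed: dict) -> dict:
    """Portability export covering both data the subject provided
    and data generated by their activity, as discussed above."""
    return {"provided": provided, "observed": observed}

raw = {"user_id": "u1", "age_band": "30-39",
       "ip": "203.0.113.7", "preferences": ["news"]}
print(minimise(raw))  # the IP address is dropped at ingestion
```

The key design choice is the whitelist: fields are kept only when explicitly justified by the processing purpose, rather than removed one by one after collection.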

Furthermore, the GDPR requires security measures that are “appropriate” to the risk, taking into account the evolving technological progress. This is particularly relevant when dealing with the potential risks of AI, which by definition evolve.

The application of the above principles will be key for all parties involved to limit their responsibility, or at least to obtain insurance cover for data protection (and related data breach) risks. In this respect, adherence to industry codes of conduct or other data protection adequacy certifications will also help.

Informed Consent

Informed consent from the data subject is another key principle of the GDPR, as was already the case in most European jurisdictions. Such consent may not be easy to obtain within an AI scenario, particularly when it is not possible to rely upon predefined sets of instructions.

This is even more relevant if we consider that updated consent may not be easy to obtain for "enriched data": non-personal data that become personal data (i.e. associated with an individual) when processing combines them with other data gathered by the AI from the environment or other sources.

This may lead to a substantial increase in requests for consent (through simplified, yet explicit forms), even when personal data are not being used. Such an increase may not necessarily entail an equivalent increase in awareness of data protection – as was seen with the application of cookie regulations in certain European jurisdictions.

When dealing with AI, it may be that under certain circumstances the parties involved will opt to request "enhanced consent", as is applied in Italy for certain contracts imposing clauses that are particularly unfavorable to the consumer. Such consent, however, will not per se exclude the responsibility of the entity ultimately accountable for the data processing.

Automated Decision-Making

The GDPR provides that individuals shall have the right not to be subject to a decision based solely on automated processing, including profiling, unless such decision is provided for by law (e.g. fraud prevention systems), is necessary to enter into or perform a contract, or is based on the data subject's explicit consent.

In the latter two instances, the GDPR also provides that the data subject shall have the right to receive an explanation from a "natural" person. Data subjects will accordingly have the right to express their opinion, and this may lead to increasing transparency as to the usage of AI, with specific explanation processes to be embedded in software architectures and governance models.

However, it will remain very difficult to determine how certain decisions are made, particularly when they are based on enormous data combinations. In such cases, any explanation will likely cover only the processes, elements or categories of data taken into account when making the decision.
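One way such an explanation process could be embedded in a software architecture is for the decision system to emit a structured "explanation record" alongside each automated decision, listing the process and the categories of data taken into account, so that a natural person can later explain them to the data subject. The sketch below is a minimal illustration under assumed names; nothing here is mandated by the GDPR itself.

```python
# Illustrative "explanation record" emitted alongside an automated decision.
# All identifiers and values are hypothetical examples.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class DecisionRecord:
    decision: str                 # outcome communicated to the data subject
    model_version: str            # which model/logic produced it
    data_categories: list         # categories of data taken into account
    process: str                  # human-readable description of the process
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

record = DecisionRecord(
    decision="credit application declined",
    model_version="scoring-v2",  # hypothetical identifier
    data_categories=["payment history", "income band"],
    process="score compared against a fixed approval threshold",
)
print(record.data_categories)
```

Recording categories of data rather than raw feature values mirrors the point above: where the full data combination is too complex to explain, the explanation can still cover the processes and categories involved.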

It is likely that data governance models will go into detail on how certain decisions are taken by the AI, so as to facilitate explanations when required. Whether this will lead to rights similar to the principle of “software decompiling” rights in certain civil law jurisdictions is yet to be determined.

Undoubtedly, data protection awareness will become increasingly relevant for all AI practitioners. More sophisticated organizations will set up specific governance guidelines when dealing with AI, with such guidelines to address not only the overall technical and data feeding processes, but also a number of legal and ethical issues.