Liability in the digital and AI age: EU looks beyond the AI Act

Following its April 2021 proposal for a European Artificial Intelligence Regulation, the Commission returns to the topic of liability for damages in the digital age with a public consultation open until January 10, 2022.

The purpose of the consultation

Having launched the first part of the public consultation on October 18, 2021, the European Commission resumed the work started in 2018 with the evaluation of the Product Liability Directive (85/374/EEC) (the Directive). On that occasion, the Commission questioned the Directive’s relevance and effectiveness in view of the challenges posed by digitalization, the circular economy, IoT, AI and cybersecurity. These tech trends cut across all industries and will attract significant investment in the years to come.

The second part of the consultation aims to collect valuable information to specifically address the issue of damages caused by AI systems, regarding both product liability and national liability rules.

In the EU institutions’ view, a clear and harmonized legal regime on product safety and liability is pivotal for businesses and consumers to take advantage of technological evolution, including AI.

The Product Liability Directive and its main limitations

The Directive applies to all movable products regardless of the technology involved, including AI. It introduced the concept of strict liability of manufacturers, whereby they are liable, regardless of fault, for defects in products intended primarily for private use or consumption that cause personal injury or property damage exceeding EUR500 (with a few deviations among the EU Member States). The burden of proving the causal connection between the product defect and the damage lies with the injured party. In certain circumstances, manufacturers can provide evidence to be released from liability (eg by proving that the defect did not exist when they put the product into circulation).

As a result of the 2018 evaluation, the Directive was found to be effective overall but challenging to apply to digital economy products.

Many stakeholders highlighted the difficulty for injured parties of proving the causal connection between damage and defect, mainly due to information asymmetry on products’ technical features and the burden of advancing the costs of the necessary technical assessments. This aspect is all the more critical in the case of complex AI systems (for example, deep learning), whose operating logic can sometimes be difficult to explain even for the producers and programmers themselves. Transparency and the ability to explain technologies and processes are therefore essential, with a level of detail varying according to the relevant sector and target audience.

Furthermore, the growing role of services as components of composite goods, together with the increasing autonomy and changeability of products, has made the concepts of “product,” “producer,” “defect,” and “damage” far more complex than when the Directive came into force (as later confirmed by the Commission’s 2020 Report on the safety and liability implications of artificial intelligence, the Internet of Things and robotics).

The most debated topics: Combination of hardware and software, circular economy and AI

The consultation focuses on the technological evolution of products and offerings, posing questions about the potential adaptation of the Directive to several scenarios. These include the native integration of software into tangible products, the supply of software able to interact with existing tangible products, the release of updates and patches to improve product functionality or correct errors, and the increasing use of software or services to control products (eg a cloud-based service for the operation of a smart thermostat).

Sustainability is another central topic within the consultation, which also looks at the potential adaptation of the Directive to business models based on the circular economy. In the case of reconditioned and remanufactured products, or products modified during their life cycle, the attribution of responsibility for possible defects is not always clear.

Regarding AI, the Commission once again highlights the difficulty for injured parties of proving the defect, the causal connection between defect and damage, and, where applicable, the fault of the party causing the damage, especially when AI systems are particularly opaque and complex. These issues also concern the cases of strict and aggravated liability envisaged by national laws that could already apply to damages caused by AI (for example, in Italy, liability for the exercise of dangerous activities or for damages caused by the circulation of vehicles).

In this context, the Commission seeks feedback on the scenarios that could follow a failure to harmonize AI liability laws or divergent interpretations by national courts. These scenarios could include additional costs for companies, limitations on AI-based cross-border activities, effects on insurance premiums, and increased prices for AI-based products and services.

Measures under consideration at the EU level to ensure civil liability for damages caused by AI systems include easing or reversing the injured party’s burden of proof, as well as harmonizing strict liability rules and insurance-based solutions.

Feedback provided so far

Two and a half months after the launch of the consultation, the Commission has received 144 valid contributions, mostly from citizens (75%) and companies (9.33%). The number of participating companies nevertheless remains low (fourteen), as does the number of consumer organizations (five) and academic/research institutions (five). Geographically, the highest number of contributions came from Germany (55%), followed by France (13%), Italy (6%) and Belgium (6%). Surprisingly, no comments have been received from China so far, even though it is among the largest producers of AI-based technology and solutions.

If you would like to know more about this topic, and artificial intelligence regulation in general, please contact giacomo.lusardi@dlapiper.com.