Artificial intelligence (AI) systems are undoubtedly among the most influential new technologies of our time and are playing an increasingly important role in a wide range of sectors. It is precisely the feature that distinguishes them from ordinary software that makes them so tricky for the law: their ability to learn, operate and even make decisions autonomously raises hard questions about accountability and proof. The fast-growing use of AI and its widespread integration into sectors from healthcare to finance not only drive fast-growing regulatory activity across the EU but also increase the risk of AI-related harmful activities and failures. Such failures naturally create a growing threat of AI-related litigation.
This article explores how AI‑related litigation is taking shape in Hungary and across the EU: the role of ex ante safety rules under the AI Act and how growing regulatory scrutiny of these rules will trigger early disputes, as well as ex post liability mechanisms. It also aims to provide practical advice for businesses on the most important steps to protect themselves, because the real question is no longer whether litigation will test these systems, but how soon – and whether businesses are ready to defend themselves in the courtroom, when an algorithm’s decision must be explained.
An expected ramp up in regulatory disputes
The purpose of the EU AI Act (Regulation (EU) 2024/1689) is to ensure safety and compliance with European values regarding the development and use of AI. Despite leaving the issue of civil liability to other instruments, it sets out detailed rules for the safe use of AI by deployers and providers and establishes a system of supervisory bodies responsible for enforcing compliance. It is not surprising that, in this early phase, regulators across the EU are under growing pressure to enforce the new framework, so regulatory disputes are likely to be the first wave of AI‑related litigation.
Looking at current regulatory activity, public enforcement has been limited to date; however, it is certainly expected to increase as the phased obligations of the EU AI Act kick in. Member States had to designate their national competent authorities (national market‑surveillance and notifying authorities) by 2 August 2025 and are still focused on shaping how these bodies will operate in practice.[1]
Hungary’s implementing framework is already in place. The AI Market Surveillance Authority (MSA) is the Minister responsible for enterprise development (currently the Minister for National Economy). This authority may investigate AI systems, order compliance, establish the fact of unlawful use and impose fines.[2] The Notifying Authority is the National Accreditation Authority, responsible for supervising conformity‑assessment bodies.
Regarding remedies, there are no special rules in place against decisions of these authorities, so the general administrative procedure applies. This means no internal appeal against such decisions, but they can be challenged through judicial review before the administrative courts.
Upon the – nearly – full entry into force of the EU AI Act on 2 August 2026[3], companies will have to navigate not only the EU AI Act but also other major EU digital regulations, such as the ten-year-old GDPR (Regulation (EU) 2016/679) and the much younger Digital Services Act (Regulation (EU) 2022/2065), creating an increasingly complex framework to comply with.[4] The industries most affected in the short term will be technology and financial services[5], where high‑risk AI systems are most common.
A current example of an AI-related regulatory dispute within the EEA is a complaint[6] before Norway’s data protection authority concerning alleged defamation by ChatGPT: when a Norwegian user tried to find out whether ChatGPT held any information about him, it made up a horror story picturing him as a convicted murderer of two of his children, while also including elements of his real life. The complaint, filed by data protection watchdog noyb.eu, claims that OpenAI violated the principle of data accuracy under Article 5(1)(d) of the GDPR by allowing its AI model to create defamatory outputs about users, and requests the deletion of the incorrect data and the imposition of a fine.
Current liability framework
AI systems are highly relevant to liability because they can cause damage in ways that do not fit neatly into traditional legal categories, and with their rapid integration into business and daily life, damage caused by AI systems has emerged as one of the major areas of AI litigation. Yet, the EU AI Act does not address civil liability, and with the withdrawal of the EU AI Liability Directive, originally proposed in 2022 in order to harmonize fault-based liability rules across Member States, we are left with a vacuum in EU-wide uniform liability rules for AI.[7] The question remains: how to map out the rules of responsibility when harm is caused by AI?
A key pillar of the liability framework is the revised Product Liability Directive (EU) 2024/2853 (PLD), which explicitly brings software – including AI systems – within the definition of “product.”[8] The PLD is a strict‑liability regime that Member States must transpose by 9 December 2026 and will sit alongside national fault‑based rules. In Hungary, the implementing amendment to the Civil Code was adopted on 16 December 2025, with the new rules applying to products put into circulation after 9 December 2026.[9]
The most significant change is that by broadening the scope of “product”, the manufacturer – in a broad sense, including developers, importers and authorized representatives – is liable for harm caused to consumers by defective AI products, without requiring the injured party to prove negligence or breach of duty.[10]
Another novelty is that the revised PLD now includes compensation for the destruction or corruption of data and for non-material harm. The latter, however, is limited to situations where it results from physical damage or personal injury.[11]
Despite some major changes, the basic nature of product liability law, and its consumer-oriented focus will still restrict AI-related claims. PLD only covers damage caused by a defective product[12] and does not apply when the claim is based on other grounds (e.g. negligent human supervision) or where the claim concerns purely financial loss (such as AI‑driven trading losses) or purely psychological harm (for example, reputational harm). In addition, the PLD does not extend to claims where the injured party is not a natural person[13] (e.g. algorithmic errors causing business interruption).
For claims that fall outside the scope of strict liability, national fault-based liability regimes will apply, putting victims in a very difficult position, as national liability rules are not yet adapted to claims for damage caused by AI. The Hungarian civil liability system is no exception: it follows the same fault‑based approach and thus faces the same shortcomings in handling AI‑related claims. To seek damages, the victim bears the burden of proving fault and causation: they must prove the unlawful act or omission of the defendant in deploying, developing or using the AI, and the causal link between that act or omission and the damage that occurred. Victims face serious challenges on both fronts: due to the self-learning capabilities and opacity of AI systems (the so-called “black box” effect) and their autonomous operation, it is not always clear whether an error or defect can be attributed to the system – and if it can, who is to be held liable: the provider, the deployer or the user – or whether a causal link exists between the wrongful behaviour and the damage.[14]
Although no examples of tort actions against an AI provider have been identified within the EU to date, this is expected to change once the revised PLD is fully implemented.
Questions of contractual liability typically arise when AI, as the subject matter of a contract, fails to perform as promised or advertised. In practice, this may involve a failure to meet agreed performance benchmarks or delivery timelines, or a defect in the AI’s capabilities.
Under Hungarian law, the claimant bears the burden of proving both the existence of a defect and its presence at the time of performance. Given the well-known characteristics of self-learning AI systems and their continuous evolution, defects may emerge not only during initial creation or programming, but also after deployment or delivery to the end user. However, due to the inherent opacity of such systems, it is often difficult to pinpoint when and where a defect originated, or whether the resulting damage stems from a design or development flaw (such as a malfunction in the underlying algorithm) or misuse or improper implementation by the user.
A separate issue concerns the scope of recoverable damages. Under most contract law regimes, including Hungarian law, recoverable damages are limited to losses that were reasonably foreseeable at the time of contracting. Therefore, losses caused by unforeseeable “black box” errors may not be fully recoverable even where a breach is established, unless the contract expressly allocates that risk.
In light of these considerations, careful scrutiny of contractual provisions governing AI systems will become increasingly important. Businesses supplying or providing AI are likely to protect themselves by excluding or limiting liability for defective AI[15], to the extent permitted by national law.
Practical guidance
Given the issues described above, companies that develop or deploy AI should expect a rise in AI-related litigation, including cases with substantial financial exposure. To mitigate these risks, businesses should consider the following steps.
Step 1: Implement preventive measures. Ensure meaningful human oversight in automated decision-making processes, so that critical decisions are not left entirely to AI systems; obtain insurance coverage tailored to AI-specific risks and adopt cybersecurity measures to protect systems from unauthorised access.
Step 2: Establish AI governance. To comply with evolving regulatory requirements, organisations should implement a governance framework with clear lines of responsibility, internal policies for AI development and deployment, and mechanisms for ongoing monitoring and compliance.
Step 3: Review of contractual provisions. Where AI is the subject matter of a contractual service, review and amend contractual terms to clearly define AI-related liability and allocate risk appropriately between the parties. Include provisions on AI performance metrics (accuracy rates, error thresholds, required level of accuracy) and remedies for underperformance.
Step 4: Maintain thorough documentation. Keep detailed logs of changes to the AI system’s structure and programming. This will enable parties to trace whether deficiencies existed at the time of performance and when they arose – crucial in the context of contractual disputes.
While these measures will not eliminate the risk of AI-related claims entirely, they should significantly reduce the company’s overall litigation exposure.
Author: Veronika Brengel