FinTech: Who is responsible if AI makes mistakes when suggesting investments?

by Giacomo Lusardi and Alessandro Ferrari

The recently reported legal action for damages arising from wrong investments made by an algorithm-based automated decision-making process is one of the first known cases of its type. The case has received some media attention worldwide and has helped reopen the debate on liability connected to the use of Artificial Intelligence (“AI”) systems. The question at issue is, in brief: who is liable for the damages caused by AI, and who must compensate such damages, if any?

The object of the legal action is K1, a system built on a machine learning algorithm fed with a large variety of data, such as real-time news and social media posts, capable of identifying correlations and recurrent patterns in the data, from which predictions on investor sentiment and stock market trends are inferred. Starting from the correlations obtained, the system sends the brokers instructions on trades to be executed, adjusting its investment strategy in real time thanks to its continual learning algorithm.
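K1’s internals are not public, so the following is only a deliberately crude sketch of the kind of pipeline described above: sentiment scoring of incoming news feeds driving trade instructions. All names, word lists and thresholds are invented for illustration.

```python
# Hypothetical, highly simplified sketch of a sentiment-driven trading
# signal. K1's real architecture is not public; everything here is an
# illustrative assumption, not a description of the actual system.

POSITIVE = {"beats", "record", "growth", "upgrade"}
NEGATIVE = {"miss", "lawsuit", "downgrade", "recall"}

def sentiment_score(headlines):
    """Crude lexicon-based sentiment: +1 per positive word, -1 per negative."""
    score = 0
    for h in headlines:
        words = h.lower().split()
        score += sum(w in POSITIVE for w in words)
        score -= sum(w in NEGATIVE for w in words)
    return score

def trade_instruction(headlines, threshold=2):
    """Map aggregate sentiment to an instruction sent to the broker."""
    s = sentiment_score(headlines)
    if s >= threshold:
        return "BUY"
    if s <= -threshold:
        return "SELL"
    return "HOLD"

print(trade_instruction(["ACME beats estimates, record growth",
                         "Analyst upgrade for ACME"]))  # BUY
```

A real system of this kind would of course replace the word lists with trained models and retrain continuously, which is precisely what makes its internal logic hard to inspect after the fact.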

According to the media, the investor has sued the hedge fund it had asked to manage part of its money using the K1 system. In short, the investor has allegedly claimed 23 million dollars as compensation for the losses suffered, complaining of a hyperbolic misrepresentation by the fund’s managers of the K1 system’s abilities. The investor has apparently invoked what in the Italian legal system might be equivalent, among other instances, to (i) a defect in the formation of the intention to enter into a legal relationship, such as willful misconduct (Section 1439 of the Italian Civil Code), consisting in misrepresentations by the other contracting party that induced the investor to enter into the agreement, or (ii) a form of pre-contractual liability (Section 1337 of the Italian Civil Code), namely negotiations conducted in bad faith or information withheld during such negotiations, aspects that become even more relevant in the investment sector, where transparency and good faith are at the basis of the fiduciary relationship established with the investor.

Systems like K1 are “weak” forms of artificial intelligence: advanced machine learning algorithms that lack self-determination, are unable to understand the data they process, and work as mere tools supporting humans in their activities. Their ability to learn automatically, however, increases their independence, makes them capable of decisions that affect the external world, and casts doubt on their being mere tools (see, in this respect, the European Parliament resolution of 16 February 2017 with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL))).

For the time being, a “strong” or “general” AI capable of understanding the world in all its complexity, learning from experience, reflecting and making autonomous decisions, as human intelligence does, remains a purely theoretical technology. Hence, the present scenario is not yet one in which the best solution would be to treat AI systems as legal entities and, consequently, hold them liable for the damages they may cause while functioning. As things stand, the solution to the issue of liability related to AI systems is to be found in the instruments currently available in our legal system, without lingering, as we will not do below, on the ongoing debate on how to regulate AI and robotics and whether the existing legal categories are adequate to govern them.

In a nutshell, depending on the specific case and the line of reasoning adopted, liability for damages caused to third parties by the (mal)functioning of AI systems might be ascribed to the manufacturer (Directive 85/374/EEC), the programmer, the owner or the user of the AI system, or to all or some of them, depending on how liability is distributed. These hypotheses should be treated as strict (objective) liability, in which case the damaged party is required to demonstrate the damage suffered and the causal nexus between the damage and the damaging event, rather than the negligence or willful misconduct of the injurer.

In such a context, transparency in AI systems plays an essential role that may be split into three main sub-categories (see the Ethics Guidelines for Trustworthy AI, by the European Commission’s independent High-Level Expert Group on Artificial Intelligence): traceability, namely the possibility to identify the datasets and processes that led the AI to its decision; communication, namely transparency on the human or AI nature of the system, but above all on the limits and potentialities of the latter; and “explainability”, namely the ability, on the one hand, to explain from a technical viewpoint the AI decision-making process (Article 22 of the GDPR) and, on the other hand, to “justify” the human decision underlying the use of such a system. “Explainability” also constitutes one of the pillars of decision-making processes based solely on automated data processing, i.e., cases in which decisions taken without human intervention have legal or, in any case, significant effects on the relevant data subjects, as happens with AI systems (for instance, an algorithm decides whether or not to grant a loan after sophisticated processing of the borrower’s personal data, matching them with other data). To the extent that the law permits such decisions, the controller shall provide the data subject with “meaningful information about the logic involved” and explain “the significance and the envisaged consequences of such processing for the data subject” (Articles 13(2)(f) and 14(2)(g) of the GDPR).
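The loan example above can be made concrete with a toy sketch of an “explainable” decision: a transparent scoring rule whose per-factor contributions can be disclosed to the data subject. The factors, weights and threshold are invented for illustration and are not drawn from any real credit model or regulatory standard.

```python
# Illustrative sketch only: a transparent loan-scoring rule whose
# per-factor contributions can be reported to the data subject, in the
# spirit of the GDPR's "meaningful information about the logic involved".
# All weights, factors and the threshold are hypothetical.

WEIGHTS = {"income_band": 2.0, "years_employed": 1.5, "prior_defaults": -3.0}
THRESHOLD = 4.0

def explain_decision(applicant):
    """Return the decision plus each factor's contribution to the score."""
    contributions = {k: WEIGHTS[k] * applicant[k] for k in WEIGHTS}
    score = sum(contributions.values())
    return {
        "approved": score >= THRESHOLD,
        "score": score,
        "contributions": contributions,  # the "logic involved", made explicit
    }

result = explain_decision({"income_band": 3, "years_employed": 2, "prior_defaults": 1})
# score = 2.0*3 + 1.5*2 + (-3.0)*1 = 6.0 -> approved
```

The point of the sketch is the `contributions` field: with a model this simple, the controller can show exactly why an application was approved or refused, whereas a deep-learning model delivering the same decision may not allow any comparably direct account of its logic.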

Traceability, communication and explainability are therefore, from a non-contractual liability perspective, three fundamental guiding principles that might enable the persons involved in the chain of development, supply and use of AI systems to identify evidence, if any, capable of breaking the causal nexus between the damage and the damaging event, releasing them from liability, as well as of demonstrating the contributory negligence, if any, of the injured party (see Section 1227 of the Italian Civil Code). Conversely, from the viewpoint of contractual liability, compliance with the principle of transparency, as outlined above, may make it easier to demonstrate that all the information the user of the AI system needed to make an informed and well-pondered decision was duly provided. In fact, if the relationship established is contractual, the supplier of the AI system must provide the user with complete information on the limits and abilities of the system, as well as set out appropriate limitations of liability to the extent permitted by law.

The level of transparency and the detail of the information to be provided will obviously depend very much on the industry sector and the type of AI system used; the decision of a consumer to rely on a robotic vacuum cleaner based on an inscrutable algorithm is very different from the decision of an investor who intends to entrust its money to an AI system whose decision-making logic is not easily understandable. As already mentioned, when it comes to investments the duty to inform becomes even stricter: the inescapable “algorithmic transparency” must therefore go hand in hand with prudent compliance with the information disclosure duties set out by the applicable laws and regulations (see, for instance, the MiFID II Directive and the MiFIR Regulation).

Transparency in, and explainability of, AI systems are not always easy to achieve, however. The need for transparency, on the one hand, clashes with the legitimate interest of AI manufacturers and programmers in protecting their IP rights by means of undisclosed know-how and trade secrets (see EU Directive 2016/943). Favoring transparency, on the other hand, may reduce the precision of the system: a deep-learning AI system may guarantee, for instance, a high degree of precision in its predictions, while its manufacturer, or even its programmer, may encounter serious difficulties in explaining the logic of its functioning.

One of the main challenges of the near future will therefore be to find a compromise between transparency in AI systems, IP rights protection and precision, for the benefit of all those involved, manufacturers and end-users included.