Top 3 Legal Predictions on Artificial Intelligence

By Giulio Coraggio, Giacomo Lusardi and Cristina Criscuoli

AI systems developed rapidly in 2019, and we expect to see further advances in these technologies in 2020.

AI is increasingly embedded in our everyday lives: companies are developing a wide range of AI systems, such as voice-command-driven platforms, facial recognition tools and AI-based customer care chatbots. However, the rise of these technologies needs to be accompanied by the availability of accurate datasets as well as appropriate infrastructure and regulations.

  1. Too early for AI regulations, courts will set the first rules

Breakthroughs in artificial intelligence will dramatically transform our world in the coming years, putting a huge strain on the ability of the pre-existing legal and regulatory framework to meet the resulting challenges. The impact of AI on many existing jobs, the risk of bias and discrimination, the liability and safety issues potentially arising from the use of AI-based systems, privacy and data protection concerns, the use of AI in judicial systems, the intellectual property ownership of works created by AI, and fakery and misinformation are just a few of the critical issues policymakers will be called on to address in the near future.

AI has been embraced by the EU and other major economies as a strategic priority capable of bringing economic and social benefits. Therefore, any unnecessary legal and regulatory constraints that may impair the development and dissemination of AI technologies should be avoided.

AI technologies are currently governed by the existing legal and regulatory framework, as well as by a growing number of non-binding soft-law instruments, namely guidelines, codes of ethics and general principles such as the OECD Principles on AI. Major companies have already adopted self-regulatory codes to gain greater trust and thus a competitive advantage in the market.

However, the question is whether we actually need specific laws and regulations for AI systems, or whether we can rely on existing ones, perhaps with some adjustments. The recent report of the European Commission’s Expert Group on Liability and New Technologies on liability for artificial intelligence and other emerging digital technologies concluded that the EU already has some basic rules to address liability arising from damage caused by AI, but that those rules need adjustments to apply properly to artificial intelligence.

Our prediction is that 2020 will not be the year of AI “hard law”, whether in the form of a single comprehensive legal instrument or of several sector-specific ones. Policymakers are still debating the very definition of “artificial intelligence” and striving to identify all the relevant AI-related regulatory issues. Since AI is a cross-sector and self-evolving technology, the task is certainly not easy. In addition, any legislation on AI should be technologically neutral so as to prevent it from becoming outdated within a few years. This further increases the complexity of regulating AI, whose distinctive feature is its ability to evolve and modify itself at an extremely rapid pace.

The time of “hard” AI regulation has yet to come. In the meantime, courts all over the world will be the first to set new guiding principles and rules on the matter, even in civil law countries. Judges have already dealt with some interesting AI lawsuits, and we believe the trend will intensify in 2020 and the following years. However, the major question is whether courts are actually ready to deal with these issues.

  2. Full awareness of the value of data will be reached, and its exploitation will increase

The functioning of AI systems is largely based on big data: AI algorithms built on deep learning can work properly only if they rely on large sets of accurate data, so data quality is a crucial element of a successful AI strategy. The ability to automatically analyze data collected from many different sources, deriving insights and correlations, enables companies to readily address the expectations of both customers and business partners.

Our experience shows that companies are increasingly aware of the value of data, and we expect this awareness to grow further in 2020. Companies that develop AI systems will intensify their investments in data exploitation in order to make better use of technology and overcome data challenges. This applies not only to personal data but also to non-personal information, whose importance is confirmed, among other things, by Regulation (EU) 2018/1807, which became applicable in 2019 and aims to remove obstacles to the free movement of non-personal data across the EU.

Given the constant growth of data volumes, one of the key factors in developing successful AI systems is extracting the full value of data. We therefore believe that in 2020 companies will improve their data strategies by carefully planning how data will be used, starting from the collection stage. For personal data, this approach is in any case required by the principles of “privacy by design” and “privacy by default” introduced by the GDPR. A forward-looking and clear vision of the purposes pursued in collecting and processing data, including non-personal data, is essential to exploiting it efficiently. For this reason, companies will focus more on the quality of the information they collect than on its quantity.

At the same time, we will see companies developing AI technologies strengthen their data-synthesis methodologies. Such companies may struggle to access the right categories of information as well as the necessary volumes of data; this is particularly true for start-ups and small companies creating AI systems, given that possession of large datasets is a significant competitive advantage in an AI-driven economy. Synthesizing the information they already own to create new data can therefore be an efficient way to increase its value, as sketched below.
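By way of illustration, the sketch below shows one of the simplest synthesis approaches: fitting per-column distributions to a small real table and sampling artificial records from them. The column names and figures are hypothetical, and real projects typically use richer generative models that also preserve correlations between columns.

```python
# Minimal data-synthesis sketch: fit simple Gaussians to each column
# of a small "real" dataset and sample many synthetic rows from them.
# Hypothetical columns and values, for illustration only; this naive
# per-column approach does not preserve cross-column correlations.
import numpy as np

rng = np.random.default_rng(42)

# A small real dataset a start-up might own (hypothetical).
real_income = np.array([32_000, 45_000, 51_000, 38_000, 60_000], dtype=float)
real_age = np.array([29, 41, 35, 52, 47], dtype=float)

# Sample synthetic records from the fitted per-column distributions.
n_synthetic = 1000
synthetic_income = rng.normal(real_income.mean(), real_income.std(), n_synthetic)
synthetic_age = rng.normal(real_age.mean(), real_age.std(), n_synthetic)

print(synthetic_income[:3], synthetic_age[:3])
```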

Moreover, an AI development strategy based on data quality requires companies to protect their data carefully. With regard to personal data, this means putting in place both a comprehensive privacy compliance program and effective anonymization techniques. As for non-personal data, establishing appropriate IP and contractual protection for databases is a crucial consideration. Ensuring compliance with applicable data laws and identifying appropriate legal instruments to protect confidential information will be key to the success of any AI strategy.
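As a concrete illustration of the personal-data side, the sketch below shows keyed hashing, a common building block of such techniques: a direct identifier is replaced with an irreversible token, so records can still be linked without exposing the identity. The field and key names are hypothetical, and under the GDPR this amounts to pseudonymization rather than full anonymization.

```python
# Minimal pseudonymization sketch: replace a direct identifier with a
# keyed hash (HMAC). Records remain linkable via the token, but the
# identity is not exposed. Key and field names are hypothetical.
import hmac
import hashlib

SECRET_KEY = b"store-this-key-separately"  # hypothetical key management

def pseudonymize(identifier: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"customer_email": "jane@example.com", "loan_amount": 25_000}
record["customer_email"] = pseudonymize(record["customer_email"])
print(record)
```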

  3. Trust, explainability and ethics will be the key drivers of AI

As anticipated in our 2019 predictions, ethics will be essential to the development of AI technologies. Indeed, “analytical” decisions made by machines do not necessarily match socially acceptable decisions, and the risk of discrimination is right around the corner. Such risk may result, for instance, in a loan request being unduly rejected or in a candidate being unfairly excluded from a recruitment process because of opaque algorithmic calculations.

Over the past year, several institutional bodies and companies worked hard to put in writing ethical principles addressed to all AI stakeholders. In particular, the European Commission’s High-Level Expert Group on AI presented two valuable deliverables: the Ethics Guidelines for Trustworthy Artificial Intelligence and the Policy and Investment Recommendations for Trustworthy AI. The key concept underlying both papers is that trust should be present at every phase of the development, deployment and use of AI systems.

However, there can be no trust without transparency, which is one of the major challenges posed by AI systems. The functioning of certain AI algorithms can be so complex that they look like “black boxes” whose outcomes may be difficult to understand even for their programmers and manufacturers. “Explainability” will thus become crucial. It can be defined as the ability, on the one hand, to explain the AI decision-making process from a technical viewpoint and, on the other hand, to “justify” the human decision underlying the use of the AI system.

According to an IBM Institute for Business Value survey, 68% of business leaders believe that customers will demand more explainability from AI in the next three years. For instance, an applicant whose loan was unduly denied will be eager to understand the reason for the rejection, as well as the possible actions he or she may take to challenge the decision.

AI manufacturers are already at the forefront of this growing demand for explainability. For example, IBM has recently launched AI Explainability 360, “a comprehensive open source toolkit of state-of-the-art algorithms that support the interpretability and explainability of machine learning models”. Based on its machine learning core, the tool is even capable of tailoring explanations to the specific recipient (e.g., the regulator or the affected user). Nevertheless, algorithmic explainability may clash with the legitimate interest of AI manufacturers and programmers in protecting their IP rights through confidential know-how and trade secrets. One of the main challenges of the near future will be striking a balance between delivering transparency of AI systems and protecting IP rights.
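To make the idea of explainability more concrete, the sketch below illustrates one generic technique, the global surrogate model: a simple, human-readable decision tree is trained to mimic the predictions of a black-box classifier, so that its logic can be inspected and shown, for instance, to a regulator or to a rejected loan applicant. This is a minimal illustration in Python with scikit-learn, not the AI Explainability 360 API, and the feature names and data are hypothetical.

```python
# Minimal global-surrogate sketch: a shallow decision tree is trained
# to approximate a black-box model's predictions so its decision rules
# can be read and explained. Data and feature names are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # hypothetical: income, debt ratio, age
y = (X[:, 0] - X[:, 1] + 0.1 * rng.normal(size=1000) > 0).astype(int)

# The "black box": an ensemble whose internal logic is hard to read.
black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# The surrogate: an interpretable tree trained on the black box's own
# predictions rather than on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The tree's rules approximate the black box and can be presented to
# the affected user as a human-readable explanation.
print(export_text(surrogate, feature_names=["income", "debt_ratio", "age"]))
```

The design trade-off mirrors the one described above: the surrogate exposes only an approximation of the model’s logic, which helps deliver transparency without publishing the underlying model itself.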

Our prediction for 2020 is that trust, explainability, and ethics will increasingly be the key drivers of AI, and that AI decisions will be frequently challenged by users both in and out of court. AI manufacturers should be ready to give a comprehensible explanation of the functioning of their systems, and — at the same time — protect their intellectual property. Additionally, AI manufacturers and vendors should draft their contracts in a highly transparent manner, based on the actual capacity and limits of the AI systems used.