Top 3 predictions on AI and IoT for 2019

by Giulio Coraggio and Cristina Criscuoli

There was massive growth of Artificial Intelligence (AI) in 2018, following the hype around Internet of Things (IoT) technologies in the previous years. But what is going to happen in 2019?

Here are our top 3 predictions for the year:

1. AI from fiction will become reality

Isaac Asimov’s “I, Robot” is a collection of stories written between 1940 and 1950, and it inspired the 2004 movie starring Will Smith.

The book depicted a world where robots are part of our everyday lives, and that world no longer appears far away. AI systems like Google Assistant, Amazon Alexa, Apple Siri and Microsoft Cortana are always with us on our smartphones and control a constantly growing number of devices in our homes.

But the growth is also occurring in the industrial sector. According to a survey run by GlobalData on over 3,000 companies in early 2018, 54% had already prioritized investments in chatbots, machine learning, and deep learning technology. More importantly, the findings suggest that the penetration of AI will grow quickly, with more than 66% of respondents indicating that AI investments will be a priority by the end of 2019.

The potential of AI appears limitless. Twitter, for instance, recently announced that 95% of the nearly 300,000 terrorism-related accounts it took down in a six-month period were flagged for review by algorithms rather than humans, and that 75% of the suspicious accounts were removed before they posted their first content.

Governments have understood the relevance of AI for their countries. France, for instance, launched an AI strategy providing for investments of €1.5 billion; the US Department of Health and Human Services ran a pilot using AI to process thousands of public comments on regulatory proposals; the UK’s Department for Work and Pensions deployed an AI system to process incoming correspondence; and the Italian Ministry of Economy and Finance implemented an AI-driven help desk to handle citizens’ calls.

The results of the above-mentioned survey are confirmed by our own experience. We have been involved in the legal review of a number of AI projects, including the use of facial recognition to identify customers and potential fraudsters, machine learning and chatbot technologies to automate customer relationships in the contracting and customer support processes, and IoT systems deployed both in Industry 4.0 projects and in smart home, connected car and telemedicine projects.

Our experience shows that companies are trying to exploit Industry 4.0 and AI technologies, but still have a “3.0 approach”: they often underestimate that such technologies

  • change a company’s business model, unveiling new legal risks (e.g. in terms of potential liabilities) that require new legal competencies and a cultural change in the company’s management and legal department;
  • require a deeper assessment of how to minimize risks and maximize benefits, both through the use of data, which is increasingly a corporate asset that needs to be protected and exploited, and through a careful selection of suppliers and negotiation of agreements with them; and
  • need the support of third-party providers since, as already experienced with Internet of Things technologies, the cost and effort of creating your own technology might be excessive and deliver lower results than those achievable through cooperation with external suppliers.

As Satya Nadella, Microsoft CEO, anticipated in 2015,

Every business will become a software business

and this is definitely one of our predictions for 2019.

2. AI regulations and their enforcement will become urgent

The evolution of artificial intelligence systems led Elon Musk to call for urgent regulation of AI in an extremely interesting and “scary” interview with Joe Rogan.

His view is that

Normally the way regulations are set up is when a bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry

but

AI is the rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’ll be too late. […] AI is a fundamental risk to the existence of human civilization.

Isaac Asimov’s “Three Laws of Robotics” represent the first attempt at regulating artificial intelligence. More recently, regulators have at least tried to tackle this need with initiatives such as the European Commission’s recent draft Ethics Guidelines for Trustworthy AI, the European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and their Environment, and the 2017 resolution of the European Parliament containing recommendations to the European Commission on civil law rules on robotics, which offers some suggestions for proper regulatory solutions, including, for instance, compulsory insurance for the use of autonomous agents.

International cooperation is necessary to regulate AI. This is the path followed by the EU Member States, which in April 2018 signed a Declaration of Cooperation on AI, agreeing to work together on the most important issues raised by AI and to jointly address social, economic, ethical and legal questions.

The current attempts to regulate AI, though, look more like statements of general principles of good behavior than actual regulations, and they lack any binding effect or enforcement mechanism.

It is hard to say whether this gap will be filled during 2019. But there is no doubt that, unless regulators take a firm step towards binding regulations able to limit the potential misuse of AI and IoT without preventing their growth, we risk that their development

  • will be hindered in regions like the European Union, where, for instance, the GDPR already considerably constrains the exploitation of technologies able to take automated decisions on the basis of personal data; and
  • will lead to major negative consequences and potential risks on matters like the allocation of liability for damages, if product liability rules are not “upgraded” for an environment where AI does not just follow its manufacturer’s instructions, but performs independent reasoning that even its manufacturer might find hard to explain.

Besides, it is not possible to control AI through “traditional” technologies and actions. Even police authorities will need to use AI in order to monitor it and enforce actions against it.

3. Ethical rules will become essential for AI

Asimov’s first Law of Robotics states that “A robot may not injure a human being or, through inaction, allow a human being to come to harm”, but such a law might fall short with highly complex AI technologies. As stressed in the movie version of I, Robot, the potential divergence between analytical and ethical reasoning can become a major issue.

Analytical decisions are based merely on the likely outcome of an event, but ethics needs to drive the decisions of artificial intelligence technologies, since analytical decisions do not necessarily match socially acceptable ones.

The most frequent example is that of a self-driving car that decides to run over pedestrians rather than swerve, because swerving would carry a higher risk of injuring both the pedestrians and the vehicle’s passengers. Another relevant example is a company’s decision to invest in AI in sectors that might be profitable, but could present considerable risks for humans if the technology goes out of control.
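To make the divergence between analytical and ethical reasoning concrete, here is a minimal, purely hypothetical sketch in Python. The scenario, risk figures and function names are our own illustrative assumptions, not taken from any real autonomous-driving system: a rule that only minimizes total expected harm can select an option that an ethically constrained rule, which caps the risk imposed on pedestrians, would reject.

```python
# Purely hypothetical illustration: the options, probabilities and threshold
# below are invented for this example and do not reflect any real vehicle software.

OPTIONS = {
    # option: (probability of injuring pedestrians, probability of injuring passengers)
    "stay_course": (0.90, 0.05),  # keep going: pedestrians at high risk, passengers safe
    "swerve":      (0.40, 0.60),  # swerve: lower pedestrian risk, higher passenger risk
}

def analytical_choice(options):
    """Pick the option with the lowest total expected injury risk."""
    return min(options, key=lambda o: sum(options[o]))

def ethically_constrained_choice(options, max_pedestrian_risk=0.5):
    """Discard options whose pedestrian risk exceeds a socially agreed
    threshold, then minimize total risk among the remaining options."""
    allowed = {o: r for o, r in options.items() if r[0] <= max_pedestrian_risk}
    candidates = allowed if allowed else options  # fall back if nothing qualifies
    return min(candidates, key=lambda o: sum(candidates[o]))

print(analytical_choice(OPTIONS))             # "stay_course": 0.95 total risk vs 1.00
print(ethically_constrained_choice(OPTIONS))  # "swerve": staying on course is ruled out
```

The point is not the specific numbers but the structure: the ethical constraint is a rule imposed from outside the optimization, which is exactly what regulators and ethics committees would have to define.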

AI should be implemented with care to avoid misuse and unintended consequences. Governments have a unique role to play in ensuring that the economic and social impacts of AI are properly managed and in setting the ethical and legislative frameworks for AI to be used safely in our communities.

The importance of an ethical approach to new technologies was particularly stressed by Apple CEO Tim Cook in his speech at the 40th International Conference of Data Protection and Privacy Commissioners. In Cook’s words:

Platforms and algorithms that promised to improve our lives can actually magnify our worst human tendencies. […] Technology is capable of doing great things. But it doesn’t want to do great things. It doesn’t want anything. That part takes all of us.

Companies like Microsoft have identified six ethical values – fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability – to guide the cross-disciplinary development and use of artificial intelligence, and ethics committees are being established in large IT corporations.

But regulators will need to understand that compliance with such ethical rules cannot be left to the discretion of companies, which should instead

  • be obliged by applicable laws to comply with them;
  • be required to prove compliance with them; and
  • be held accountable for them, with potential sanctions in case of breach.

A famous quote from Isaac Asimov is:

The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom.

AI will grow quickly in 2019, and regulators, as well as police and judicial authorities, need to create an appropriate environment to ensure the proper exploitation of this technology.

If you want to discuss the above, feel free to reach out to Giulio Coraggio (giulio.coraggio@dlapiper.com) and Cristina Criscuoli (cristina.criscuoli@dlapiper.com).