By Giacomo Lusardi, Karin Tayel and Andrea Olivieri
AI is all around us and it’s here to stay
AI is rapidly permeating economic sectors, from financial services and insurance to life sciences, and encompassing industries as diverse as retail, industrials, real estate, media and sports. It is reshaping traditional paradigms and marking a shift in how enterprises operate and innovate.
In financial services, AI technologies are being deployed to boost customer service capabilities, streamline credit assessments, and strengthen fraud detection mechanisms. Similarly, AI models are transforming insurance operations in claims processing, underwriting, customer service, and risk assessment, and enhancing decision-making and resource management. In life sciences, AI enables groundbreaking advances in research and development, facilitating personalized patient care and innovating diagnostic and treatment methods. Industrials are also harnessing the power of AI to optimize operational efficiency and improve organizational resilience against market fluctuations and disruptions.
According to DLA Piper’s Global AI Governance Report from last September, large and medium-sized businesses are rapidly embracing AI: 96% of organizations are rolling out AI in some way, with at least four projects live in each company. The report also reveals that 83% of companies have a defined AI strategy, and 86% of those have incorporated guidelines to steer their AI initiatives. This indicates a strong commitment to responsible AI adoption. But the report also suggests that while companies are taking steps toward compliant and ethical AI use, addressing the scale of issues raised by AI will take more than these measures.
Benefits and risks
Despite the transformative potential of AI to change business practices and drive unprecedented value creation, the path to AI integration is full of potential pitfalls. The EU AI Act has yet to be finally adopted, and companies will have 24 months (with a few exceptions) after it enters into force to comply. But existing legislation and sector regulations already pose several challenges on the route to corporate AI implementation, and the risk of issues, fines, investigations, and legal actions is significant.
One serious challenge is the lack of transparency, especially in complex deep learning models such as large language models (LLMs). This opacity makes it difficult for users to understand how AI systems make decisions, fostering distrust and resistance to adoption. It may also hinder efforts to identify and rectify biases in AI algorithms, as stakeholders may struggle to scrutinize the underlying decision-making processes. Additionally, the absence of transparency poses challenges to regulatory compliance and ethical oversight, as it becomes harder to assess whether AI systems abide by laws, regulations, and standards.
AI systems have the potential to propagate societal biases, resulting in discriminatory outcomes. This occurs when the data used for training AI models mirrors societal prejudices, including those rooted in race, gender, or socioeconomic status. If the training data exhibits bias, the AI system might internalize and perpetuate these biases in its decision-making processes. For instance, in the insurance sector, if historical data used to train an AI-based risk assessment tool demonstrates a bias against certain demographic groups, the AI system could inadvertently perpetuate this bias by unfairly pricing policies or denying coverage to individuals from those groups, exacerbating existing disparities in access to insurance.
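To make the bias risk concrete, the following minimal sketch computes a disparate impact ratio on the outputs of a hypothetical coverage-approval model. The column names, the data and the 0.8 rule-of-thumb threshold are illustrative assumptions, not a full fairness audit.

```python
import pandas as pd

# Hypothetical model outputs: "group" is a protected attribute,
# "approved" indicates whether the AI tool approved coverage.
data = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per demographic group.
rates = data.groupby("group")["approved"].mean()

# Disparate impact ratio: rate of the least-favored group over the
# most-favored group. A common rule of thumb flags ratios below 0.8.
ratio = rates.min() / rates.max()
print(rates.to_dict(), f"disparate impact ratio = {ratio:.2f}")
if ratio < 0.8:
    print("Potential adverse impact: review training data and model.")
```

Checks of this kind are a starting point for scrutiny, not proof of compliance; a defensible fairness review will look at several metrics and at the training data itself.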
Further issues concern data privacy and cybersecurity. AI systems rely on extensive datasets to train their algorithms and enhance performance. These datasets encompass a wide array of information and may include personal data such as names, addresses and financial information, as well as sensitive information like medical records and social security numbers. The collection and processing of such data raise significant concerns, such as the risk of data breaches and unauthorized access to personal information. Beyond data privacy, implementing AI systems may entail specific vulnerabilities and threats, including potential breaches, data manipulation, adversarial attacks, and the exploitation of AI models for malicious purposes.
AI also brings about some challenges related to intellectual property (IP), particularly regarding training AI algorithms on third-party datasets and protecting AI-generated outputs. Training AI algorithms using proprietary datasets owned by third parties carries the risk of infringing their IP rights, potentially leading to legal disputes involving copyright, trade secrets, or other IP rights. Questions can also arise concerning the protectability of AI-generated outputs, such as software code, text, images, or other content. Determining whether the AI output can be protected through IP and defining the boundaries of such protection can be complex, especially when it involves combining and transforming existing works.
All these risks may expose companies adopting AI to severe liability towards their customers, partners and stakeholders. To mitigate these risks – aside from the requirements and obligations that will apply under the AI Act – companies should implement robust internal policies and guidelines to govern AI systems’ development, deployment, and usage. They should also incorporate contractual safeguards into agreements with third-party providers and stakeholders to outline responsibilities and liabilities related to AI usage. Technical measures, such as encryption, access controls, and anomaly detection, should be employed to protect data and AI systems from breaches and unauthorized access. Regular security audits and vulnerability assessments can help identify and mitigate potential weaknesses in AI systems and infrastructure. Furthermore, implementing organizational measures, such as regular employee training and awareness programs, can create a culture of accountability and compliance with regulatory requirements and ethical standards in AI deployment.
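To make one of these technical measures concrete, here is a minimal sketch of statistical anomaly detection on traffic to an AI model endpoint. The request counts and the two-standard-deviation threshold are illustrative assumptions, not a production monitoring design.

```python
import statistics

# Hypothetical hourly counts of requests to an AI model endpoint.
hourly_requests = [120, 118, 130, 125, 122, 119, 640, 127]

mean = statistics.mean(hourly_requests)
stdev = statistics.stdev(hourly_requests)

# Flag hours whose volume deviates more than 2 standard deviations
# from the mean: a crude signal of scraping or adversarial probing.
for hour, count in enumerate(hourly_requests):
    if abs(count - mean) > 2 * stdev:
        print(f"hour {hour}: {count} requests, anomalous (mean {mean:.0f})")
```

In practice such alerts would feed into the security audits and incident-response processes mentioned above rather than stand alone.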
Insuring the unpredictable
On top of these measures, companies across industries are assessing with their brokers whether the risks stemming from the use of AI systems fall within their existing insurance coverage or whether they need new policies.
Although certain existing policies (such as PL/PI insurance, cyber and third-party liability) may cover some AI risks, there are significant gaps that require a rethinking of existing policies and, depending on the specific situation, the creation of new solutions.
Risk assessment is the basis of the insurance contract. A risk can be insured if:
- It causes a definite loss taking place at a specific time, in a specific place, and arising from specific causes.
- The loss caused is accidental.
- Losses are predictable (predictability allows evaluating their frequency and severity, as illustrated in the sketch after this list), and
- Underwriters can provide affordable premiums.
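Why predictability matters can be made concrete with a minimal actuarial sketch. The figures below are purely illustrative, not market data: the pure premium is expected claim frequency multiplied by expected severity, with a loading added for expenses and margin.

```python
# Minimal pure-premium sketch with illustrative figures.
expected_frequency = 0.02   # expected claims per policy per year
expected_severity = 50_000  # expected cost per claim (EUR)
loading = 0.30              # expenses, profit and risk margin

pure_premium = expected_frequency * expected_severity
gross_premium = pure_premium * (1 + loading)

print(f"pure premium:  {pure_premium:,.2f} EUR")   # 1,000.00
print(f"gross premium: {gross_premium:,.2f} EUR")  # 1,300.00

# If AI-related losses cannot be estimated reliably, neither input is
# known with confidence and the premium cannot be priced affordably.
```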
Carrying out an accurate risk assessment is fundamental both for underwriters and for insureds: underwriters can exclude or limit certain specific risks, and insureds can shape the cover that best protects them against specific claims. In the case of AI liability, risk assessment is a new frontier to be explored. Here, insurance is not a static concept: it can be very difficult to appraise the aggravation or reduction of risks, since AI-related risks can change rapidly.
The first policies currently available on the market covering third-party AI risks offer a range of solutions to adapt existing insurance to AI challenges, such as:
- Specific exclusions.
- Substantial deductibles.
- Co-insurance risk-taking.
- Specific coverage limits/sub-limits, which can transform unquantifiable underlying risks into known maximum exposures (see the sketch below).
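The effect of the last mechanism can be illustrated with a short sketch using hypothetical figures: however large the underlying loss, a per-claim sub-limit caps the insurer’s exposure at a known maximum.

```python
# Hypothetical AI-liability claims against a policy with a sub-limit.
sub_limit = 250_000   # EUR per-claim sub-limit for AI-related losses
deductible = 10_000   # EUR retained by the insured per claim

claims = [5_000, 120_000, 900_000]  # underlying losses, incl. a large one

for loss in claims:
    # Insurer pays the loss above the deductible, capped at the sub-limit.
    payout = min(max(loss - deductible, 0), sub_limit)
    print(f"loss {loss:>9,} EUR -> insurer pays {payout:>9,} EUR")

# However large the loss, the insurer's exposure never exceeds 250,000 EUR.
```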
In the near future, some risks currently covered may no longer be insurable, at least not without a higher premium. At the same time, some risks currently left without cover might become better understood and affordably insurable. More risks than before could be eligible for coverage on reasonable terms, based on tailor-made evaluations.
The EU could provide more precise guidance. It has intervened with a proposal for a directive (the AI Liability Directive), which aims to harmonize the liability regime for damage caused by AI systems, and with the aforementioned AI Act, which mainly aims at preventing damage.
Article 5 of the AI Liability Directive states that the EU will consider imposing compulsory insurance cover. But it doesn’t clarify who would be subject to this obligation: companies that deploy and use AI? Companies that produce AI systems? Companies that sell them? All of them?
The use of AI by carriers
The use of AI by insurance companies themselves will also contribute to:
- Improving risk assessment. In this evolution, insurance is likely to shift from its current state of “assess and indemnify” to “predict and prevent,” transforming every aspect of the insurance industry.
- Differentiating risks more precisely.
- Deeper and faster detection of insurance fraud.
- Faster claims handling and management.
- Offering more affordable premiums.
According to various operators, this improvement in risk assessment will also allow insurance companies using AI to expand their services in the following sectors:
- Cybersecurity insurance.
- Blockchain integration.
- Climate risk assessment.
In Italy, the legislator has recently introduced mandatory insurance against catastrophic events. Accurate and prompt AI-based risk assessment will be essential to respond efficiently to insureds’ demand.
Nowadays, the use of AI in claims handling is a concrete reality in motor and property insurance. Insureds can notify a claim with a few taps on their phone, providing photos of the damage to be indemnified, and obtain immediate assistance with repairs.
AI is also already used to manage claims: various insurtech companies have recently created software that summarizes judicial pleadings and provides a quick analysis of the claim, speeding up handling for claims managers.
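How such summarization software might work can be sketched in a few lines. The example below assumes the OpenAI Python client, an illustrative model name and a local plain-text pleading file; it is a generic sketch, not any specific insurtech product.

```python
from openai import OpenAI  # assumes the openai package is installed

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical plain-text judicial pleading loaded from disk.
with open("pleading.txt", encoding="utf-8") as f:
    pleading = f.read()

# Ask the model for a claims-handler-oriented summary of the pleading.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You summarize judicial pleadings for claims handlers: "
                    "parties, claimed amounts, key allegations, deadlines."},
        {"role": "user", "content": pleading},
    ],
)
print(response.choices[0].message.content)
```

Any real deployment would add human review of the output and safeguards for the personal and privileged data such documents contain.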
Key takeaways
- AI has benefits and risks.
- Insurability depends on understanding the risk.
- Using AI could entail new risks to be insured.
- New insurance products are expected to cover risks which could have been considered uninsurable in the past.
- The use of AI by insurance companies will allow:
  - differentiation of risks
  - creation of more tailor-made insurance solutions
  - detection of insurance fraud
  - affordable premiums
  - faster claims handling processes
  - expansion into cybersecurity insurance, blockchain integration and climate risk assessment