How Insurance Companies Are Embracing Machine Learning, According to an IVASS Survey

The Italian Insurance Supervisory Authority (“IVASS”) recently published the results of a survey conducted in 2022 on the use of Machine Learning (“ML”) algorithms by insurance companies in their relationship with insureds. The survey involved 93 companies and is part of a broader IVASS strategy aimed at analyzing the evolution and impact of Insurtech.

The survey shows that just under half of the companies surveyed (40 out of 93) use Artificial Intelligence (AI) systems and that the vast majority of them (33) use ML systems.

The main areas of use are fraud prevention and claims management (chiefly in motor liability insurance), underwriting, pricing, and the identification of customers likely to churn.

Regarding the governance of AI/ML systems, most companies participating in the survey have yet to adopt specific policies. Only one company indicated that it had adopted a dedicated policy, while 19 others stated they were developing one and five that they had not yet planned to do so.

The impact on other company policies (e.g., risk management, compliance, internal audit, or IT) is also limited for the time being: 19 companies indicated that they had not amended their policies following the adoption of AI/ML systems, while seven are adapting existing policies.

Interestingly, over half of the companies using ML algorithms stated that they have internal mechanisms to assess fairness towards insureds and to detect exclusions or discrimination. Fairness and non-discrimination are among the six core principles identified by EIOPA in its 2021 paper on AI governance principles, along with proportionality, transparency and explainability, human oversight, data governance and record keeping, and robustness and performance. The companies without such safeguards stated that they did not need them, given the nature of the algorithms used and the data processed.
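
By way of illustration only, the sketch below shows one simple form such an internal fairness check could take: a demographic parity comparison of a model's outputs across customer groups. The dataset, column names, and tolerance threshold are hypothetical assumptions and are not drawn from the survey or from any particular insurer's practice.

```python
# Minimal sketch of one possible fairness check: demographic parity on the
# outputs of a hypothetical churn model. All names and values are illustrative.
import pandas as pd

def demographic_parity_gap(df: pd.DataFrame, group_col: str, pred_col: str) -> float:
    """Return the largest difference in positive-prediction rates across groups."""
    rates = df.groupby(group_col)[pred_col].mean()
    return float(rates.max() - rates.min())

# Example: per-customer predictions from a (hypothetical) churn model.
scores = pd.DataFrame({
    "gender": ["F", "F", "M", "M", "M", "F"],
    "predicted_churn": [1, 0, 1, 1, 0, 0],
})

gap = demographic_parity_gap(scores, group_col="gender", pred_col="predicted_churn")
print(f"Demographic parity gap: {gap:.2f}")
if gap > 0.10:  # illustrative tolerance, to be set by the insurer's own policy
    print("Potential disparity detected: escalate for human review.")
```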

All companies in the survey that employ ML systems adopt a “human in the loop” approach, with a human supervisor verifying the results and making the final decision. In terms of transparency and explainability, the companies using closed ML models envisage using tools that help explain the models' logic and internal functioning, in order to avoid or limit the so-called “black box” effect.
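
As an illustrative sketch of what such an explainability tool might look like in practice, the example below applies a model-agnostic technique (permutation importance, via scikit-learn) to a synthetic black-box classifier. The dataset and model are placeholders; the survey does not disclose the tools actually used, and alternatives such as SHAP or LIME serve the same purpose.

```python
# Minimal sketch of a model-agnostic explainability check on a "closed"
# (black box) model, using permutation importance from scikit-learn.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for, e.g., a claims fraud dataset.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
feature_names = [f"feature_{i}" for i in range(X.shape[1])]

model = GradientBoostingClassifier(random_state=0).fit(X, y)

# How much does the model's accuracy drop when each feature is shuffled?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, importance in sorted(zip(feature_names, result.importances_mean),
                               key=lambda item: item[1], reverse=True):
    print(f"{name}: {importance:.3f}")
```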

It is also worth noting that most companies developed their ML projects internally, in partnership with technology providers, while a minority of respondents (20%) relied on outsourced services.