12 December 2024 · 5 minute read

AI Act and insurance: Effects and strategic opportunities

The sixth seminar in DLA Piper's Insurance sector series focused on the implications of the European Artificial Intelligence Regulation for the insurance sector.

DLA Piper professionals from the Insurance and Technology sectors and insurance industry representatives discussed strategic opportunities, key AI use cases, and steps to take to comply with the regulation. Here's what emerged.

 

The wave of generative AI and the AI Act

In his introductory remarks, Giacomo Lusardi pointed out that adopting AI in insurance has enormous transformative potential. But it's equally challenging in terms of compliance, data governance, ethics, and risk mitigation. Industry players have been using algorithmic models for some time, but it's the current wave of generative AI that promises unprecedented value extraction from the enormous amount of natural language data generated across supply chain operations.

The new European AI Regulation (EU Regulation 2024/1689, the AI Act), which entered into force on August 1, 2024, plays a crucial role. It provides a transitional period for compliance, and most of its provisions will become applicable after 24 months. The AI Act is cross-cutting legislation, but it contains provisions with specific implications for insurance. It will be supplemented at the European level by technical standards and at the national level by implementing regulations. Insurance supervisors are already working to better define its scope.

Requirements and obligations depend on the risk level of AI systems. The most extensive and onerous obligations apply to high-risk systems. AI systems used to determine risk or premium for life and health policies are considered high risk. Obligations also vary according to the operator's role in the AI supply chain: they are more stringent for providers and lighter for users (called deployers). Separate obligations apply to models defined as "general purpose," like the one underlying OpenAI's ChatGPT.

 

Levels of risk

Unacceptable risk

  • Manipulative personalization of policies based on deep knowledge of the insured, to the insured's detriment.
  • Biometric categorization to infer personal attributes, such as political views or sex life, in order to construct risk clusters.

High risk

  • Risk assessment and premium determination for life and health policies.

Limited risk (transparency)

  • Virtual assistants for onboarding and budgeting, policy distribution and sales, and claims management.

Minimal risk

  • Other systems.

 

How insurance operators should move forward

At the very least, insurance operators should map all AI uses in their business, train staff and the distribution network, establish an AI governance framework, appoint a figure to oversee AI in the business, and set up a committee composed of the relevant functions. They should also adopt specific contract clauses for procuring and developing AI solutions.

Proactively adopting the AI Act will enable insurance operators to be ready for deadlines and avoid penalties of up to EUR35 million or 7% of total worldwide annual turnover. Proactive compliance can also reduce the uncertainty of innovating in a regulatory environment that has so far been unclear. And it can strengthen corporate image.

 

The panel

The panel, moderated by Giacomo Lusardi and Karin Tayel, included Roberto Calandrini, Chief Data & Analytics Officer, AXA Italia; Gerardo di Francesco, Co-founder and Managing Partner WIDE GROUP, Co-founder IIA - Italian Insurtech Association; and Vanessa Giusti, General Counsel, Generali Operations Service Platform.

They discussed the insurance industry's degree of maturity and awareness with respect to adopting AI tools. It emerged that the industry is still in an experimental phase. One of the greatest challenges for the coming years will be running AI systems at full scale within business processes, with the dual objective of profitability and reliability.

Drawing on the panelists' individual experiences, it emerged that the development of AI solutions often takes place in-house, building on significant prior experience in constructing the data platform and use cases.

The panel also explored the role of general counsel and the legal department in insurance companies implementing a disruptive technology like AI. Undoubtedly, the literacy process is crucial: both lawyers and strategic figures in the AI implementation process need to make management aware of the value of AI tools.

Panelists then discussed the requirement for providers of high-risk systems to adopt best practices in data quality and data governance, a theme that had already emerged when the GDPR came into force. The challenge, then, comes down to quality and service levels rather than mere regulatory compliance. Data is often seen as a by-product, but the moment it's integrated into business processes, the entire data supply chain has to function at full capacity.

Right now, the critical issue concerns companies that haven't implemented a formal internal process, with specific figures in charge, to ensure that data flows with a level of quality that makes the AI process work. The challenge to be addressed is top management's perception that data needs to be handled in a new way, including at the level of resources to be allocated. In this regard, literacy again plays a key role.
