26 June 2024
5 minute read

The Insurability of AI Risk: A Broker's Perspective

In the previous issue of DeRisk, we examined the complexities of AI risk and discussed how companies can get insurance coverage in this area. This time, we spoke with leading insurance brokers to get further industry insights. Alessandra Corsi and Rossella Bollini from Marsh gave their perspective on the nuances of AI risk coverage and the evolving role of insurance in mitigating AI-related liabilities.

1. From the broker's perspective, how is the insurance market responding to the coverage of AI risk?

The insurance market, especially since the GenAI explosion, has started to monitor the rise of new risks related to the development and use of AI solutions, both to anticipate the demands of insureds and to manage exposure efficiently across existing portfolios. The market is still in a “wait-and-see” stage: apart from a very few cases, specific ad hoc AI solutions are not yet available.

Based on Marsh's global perspective, the topic is attracting more attention in the US and in selected European countries: insureds see the challenge that AI solutions bring, wondering how to transfer their residual AI risk to the insurance market and pushing insurers to deliver answers and propose solutions. So far, the Italian market – from both a supply and a demand angle – hasn't developed any meaningful initiatives, but we expect this to change in the near future, with carriers looking to develop value-added solutions for their clients.

2. How is AI risk exposure transferred? Is it possible to rely on traditional products?

Currently, there’s only one ad hoc insurance product for AI risk, distributed by a leading player in the reinsurance market. Beyond this, clients looking for coverage can explore established product lines such as Cyber, Professional Indemnity, Crime, Intellectual Property and Product Liability, where claims and/or circumstances related to AI are typically not yet specifically excluded. Cover seems to be afforded on a “silent” basis: not affirmatively covered and not explicitly excluded. To give a few examples, if training data and input data can be captured by the model and leaked in its outputs, causing a data breach, the cyber policy could cover it; similarly, if a fraud is conducted using a deepfake, the crime policy could cover it. To curb this uncertainty, affirmative AI endorsements on cyber and crime policies are very slowly being released, but at the moment this isn’t the norm.

3. What risks do you think are potentially insurable with an AI policy?

Insurability is a complex topic, as it depends on the exposure, the business conducted and the insured's risk appetite. Depending on the situation, you could decide to cover first-party damages – insuring the performance of self-built AI – or potential third-party liability profiles, whether contractual or non-contractual. Depending on the insured's business, it might be relevant to cover risks from hallucinations and false information, privacy infringement, intellectual property violations, or unfair or biased output.

It goes without saying that a certain degree of tailoring is required to shape a product that fits the insured's needs.

4. Are traditional underwriting methods still relevant and applicable in the AI world?

They are still relevant, but only partially. We can compare AI with the cyber risk underwriting process: although cyber is a complex and nuanced risk, the insurance market has settled on the use of questionnaires, sometimes combined with perimeter scanning or risk dialogues, and the path is by now fairly linear. For AI risk, it may not be as straightforward. To quantify the risk, underwriters will need to identify the relevant information on a case-by-case basis (deployer, user, type of AI involved) and evaluate it alongside data on model training and post-deployment controls.
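Purely as an illustration (the field names below are hypothetical, not an actual underwriting questionnaire), the case-by-case information mentioned above could be organised along these lines:

```python
from dataclasses import dataclass, field
from enum import Enum

class AIRole(Enum):
    """The insured's role in the AI value chain."""
    DEVELOPER = "developer"
    DEPLOYER = "deployer"
    USER = "user"

@dataclass
class AIUnderwritingProfile:
    """Hypothetical case-by-case underwriting inputs for an AI risk,
    mirroring the factors named above: role, type of AI involved,
    model training data and post-deployment controls."""
    role: AIRole
    ai_type: str  # e.g. "GenAI chatbot", "fraud scoring model"
    training_data_sources: list[str] = field(default_factory=list)
    post_deployment_controls: list[str] = field(default_factory=list)

# Example profile for a bank deploying a third-party fraud model
profile = AIUnderwritingProfile(
    role=AIRole.DEPLOYER,
    ai_type="fraud scoring model",
    training_data_sources=["internal transaction history"],
    post_deployment_controls=["human review of flagged cases",
                              "monthly drift monitoring"],
)
```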

Quantifying damage in the event of a claim is also very complex: consider an AI product provided to banks to distinguish legitimate transactions from fraudulent ones. Here, the provider would want to buy a policy covering underperformance of the product. To avoid difficulty in quantifying the loss, it may be necessary to set a threshold: for example, guarantee that the model will catch at least 99% of all fraudulent transactions, and if the AI fails to deliver as promised, the insurance company will pay.
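To make the mechanics concrete, here is a minimal sketch of how such a threshold-based trigger could be evaluated, assuming a hypothetical 99% guarantee and policy limit (the function names and figures are illustrative, not an actual policy wording): the policy pays only if the measured detection rate on fraudulent transactions falls below the guaranteed threshold.

```python
# Hypothetical sketch of a threshold-based payout trigger for an
# AI fraud-detection performance guarantee. All names and figures
# are illustrative assumptions, not a real product.

def detection_rate(frauds_total: int, frauds_caught: int) -> float:
    """Share of fraudulent transactions the model correctly flagged."""
    if frauds_total == 0:
        return 1.0  # no fraud in the period: guarantee trivially met
    return frauds_caught / frauds_total

def claim_payout(frauds_total: int, frauds_caught: int,
                 missed_fraud_losses: float,
                 threshold: float = 0.99,
                 limit: float = 1_000_000.0) -> float:
    """Pay losses from missed frauds, up to the limit, but only if
    the detection rate breached the guaranteed threshold."""
    if detection_rate(frauds_total, frauds_caught) >= threshold:
        return 0.0  # guarantee met: no coverage trigger
    return min(missed_fraud_losses, limit)

# Example: 10,000 frauds in the period, 9,850 caught (98.5% < 99%),
# and the 150 missed frauds caused 400,000 in losses.
print(claim_payout(10_000, 9_850, 400_000.0))  # -> 400000.0
```

A real policy would of course also define the measurement period, how frauds are counted and any deductible; the point is simply that a pre-agreed threshold turns a hard loss-quantification debate into a verifiable calculation.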

5. Have you experienced the notification of any claims under AI policies or, anyway, related to damages caused by AI? If yes, which type of claims?

At Marsh, most of the claims we’ve seen involving the use of GenAI are in the domain of fraud: fraudulent transfers of funds obtained by deceiving employees into believing they’re complying with legitimate requests from internal parties in the company. As of now, claims in this category are generally notified under crime policies. GenAI is also used to refine phishing attacks (currently one of the main vectors of ransomware), making them more credible and increasing their success rate.

6. What are your predictions for the near future?

The path will likely be the same as for cyber risk: eventually, insurers will need to quantify and monitor AI exposure in traditional insurance policies, to the extent that it could represent a significant unexpected risk to their portfolios. To do so, the reinsurance markets and Lloyd’s of London might start imposing AI exclusions on cyber, professional indemnity, crime and other traditional products, creating a gap that will need to be filled. By that time, we expect AI-specific insurance products to be ready to perform, supported by a defined and replicable underwriting process and a consistently predictable loss quantification mechanism.