6 November 2024 · 9 minute read

Where could AI risks hit liability insurers? A European perspective

There has been much commentary on how the development of AI will transform businesses, including insurance companies that are making use of that technology in their underwriting and claims handling. In this short article we look at the risks policyholders are exposed to and, as a result, the extent to which liability insurers should prepare for potential claims associated with AI-related failures. Whilst not the only product lines impacted, we focus here particularly on Directors & Officers (D&O) and Errors & Omissions (E&O) insurance.

The emergence of large language models (LLMs) like ChatGPT has expanded the potential scope of business application of machine learning tools. Consequently, as policyholders use AI applications in their business operations, the scope of risks faced under professional and financial lines policies has increased.

Below, we discuss some relevant case law and consider coverage issues as they currently present themselves – all with an eye on the ever-evolving use of AI and the increasing prevalence of LLMs.


AI-related Risks

While by no means exhaustive, we have identified some key risks we see arising with the increasing use of AI tools.

  • The biggest AI-related risks include misinformation/disinformation, such as manipulated and falsified information (including deepfake videos and voice cloning), which can facilitate security breaches or enable cyber-attacks that might lead to claims in any sector.
  • In the realm of financial lines, the use of AI models for decision-making can potentially implicate D&Os, especially where, for example, decisions relying on AI output are complex or objectively lack clear justification, or where a decision that relied on, or was influenced by, AI is simply wrong (eg because it was based on a hallucination) or falls below the required professional standards.
  • Another risk associated with LLMs involves copyright infringement. If an LLM has been trained primarily on copyrighted material, there’s a possibility of unintentional infringement. Content creators and publishers could potentially bring claims against users of AI-generated content that reproduces their protected work. In the US, there are already over a dozen AI-related copyright infringement claims.
  • Further, companies face the danger of misrepresentation claims if they overstate their AI capabilities or downplay the risks these models pose to their business strategies. (There have already been several cases regarding AI-related misrepresentations, mainly in the US.)


Case Law – AI washing

In the US, there is a growing trend of AI-related securities litigation, where companies are accused of overstating the capabilities of their AI technologies to attract customers and investors, a practice referred to as “AI washing”. We discuss the US experience here, as we consider it will be indicative of how similar complaints may arise in the EU and beyond.

Recently, the U.S. Securities and Exchange Commission (SEC) took action against a number of firms for making false and misleading statements about their AI utilization. In these cases, the firms were found to have claimed AI capabilities that they did not actually possess. The SEC emphasized the importance of accurate disclosures, especially as more investors are drawn to AI-driven investment strategies.

The SEC has targeted a variety of business sectors – examples include a cloud-based lending platform, a data engineering company and a US real estate/house pricing business – highlighting that scrutiny falls on all types of companies leveraging AI in their products and services.


Liability for AI-powered products

As well as risks faced by those utilising AI tools, the manufacturers of AI-enhanced products themselves could also be targeted if their products cause harm to their customers or to third parties. For example, if an autonomous car’s AI system makes a faulty decision leading to an accident, the manufacturer of the car and/or of the subcomponent incorporating the AI-enhanced self-driving application could face claims seeking to hold them liable for the injuries or damage caused.

Because AI is a complex technology that can evolve and act unpredictably, it increases the difficulty of determining fault. It’s worth noting, though, that many legal systems (including in the European Union and England & Wales) impose strict liability rules, such that proving fault is not required to obtain damages from the manufacturer of a defective product. Of course, where a manufacturer is targeted for liability, or indeed actually found liable, for harm caused, notifications and claims can be expected under product liability insurance policies.


Will regulation affect AI-related claims?

Although AI-specific regulation is still in its infancy, there is no regulatory vacuum: numerous technology-neutral rules, such as those on data protection, ensure transparency in data usage and processing and often require human oversight in automated decision-making.

Whilst regulators in the UK are working on sector-specific regulation, the EU AI Act aims to establish a uniform legal framework for AI development, applicable to all sectors whether regulated or not. The EU AI Act takes a tiered approach based on AI system risk levels, with the most stringent requirements applying to high-risk systems and imposing specific responsibilities on AI developers and deployers.

Under the EU AI Act, breaches by AI developers and deployers might lead to regulatory intervention and thus represent a claims risk if policyholders become the subject of regulatory investigations or fines in relation to their AI product. Should a developer be found to be in breach of its regulatory obligations, this may in turn lead to follow-on actions from affected third parties claiming compensation under applicable contractual or other liability regimes.

Beyond the AI-regulatory framework, the European Union is also working on reforming the liability rules applicable to AI solutions and AI-enhanced products in order to address the specific challenges they pose. The key initiatives include the proposed AI Liability Directive, which is concerned with ensuring accountability for incidents involving AI systems. The new legislation aims to make it easier for victims to claim compensation, particularly by addressing the difficulty of proving fault when complex technologies such as AI are involved.

The efforts led by the European Union also include a reform of the existing liability regime for defective products, which was adopted 40 years ago and is now partly outdated. The new Product Liability Directive, which was adopted on 10 October 2024, aims to adapt the existing regime to new technologies. The text will expand the scope of liability to take into account digital products such as software, as well as new sources of risk resulting from cybersecurity breaches or the autonomous behaviour and self-learning capabilities of AI solutions.

This new legislation will make it easier for victims of AI-related incidents to bring claims, and developers of AI solutions, as well as their liability insurers, should expect greater exposure in the coming years as a result.


Type of Loss – Type of Coverage

Professional liability policies are intended to cover losses associated with the (alleged) negligent provision of professional services, and typically insure the consequences of human errors. But when the use of an AI tool creeps in and contributes to, or causes, an error, the situation becomes more complex: as it is only the people and companies using the AI that are insured, the policy likely will not cover errors in the AI tool itself. It may often be difficult to separate the AI’s activity from human action, given that AI is used as a tool.

As explained above, under some circumstances an AI developer’s liability could be triggered even if no specific human error or negligence has been identified, for example where a strict liability regime (such as product liability) applies. In such an event, the developer’s professional liability policy may be triggered, subject to the specific terms and conditions of the policy concerned.

Problems will arise when it is not clear whether it was a malfunction of the product, or its negligent use, that caused the loss. There is therefore likely to be a grey area between the trigger of professional liability cover and that of product liability cover.

Insurers might be particularly concerned about ‘silent AI’ cover (akin to ‘silent cyber’ cover), ie unintended cover for AI-related events, if a decision based on or influenced by AI leads to a loss. Determining coverage will depend on the specific policy terms, the nature of the error, and whether the AI tool was explicitly covered or separately excluded.

Against this background, as AI use becomes more and more prominent in every industry, liability insurers will need to ask insureds more questions about their use of AI tools and, in the event of an issue arising, about the surrounding processes, in order to determine whether there has been a ‘human error’ that might be covered.


Exclusions

Currently, specific exclusions for AI-related losses are not commonly seen in liability policies. If a policy expressly covers the error or omission in question and there is no AI-specific exclusion, the involvement of AI should not in and of itself negate coverage – subject always to a full consideration of the terms, conditions and exclusions in the policy more generally.

As insurers may hesitate to cover losses for which they did not collect a premium, carriers will need to determine where an AI exclusion might fit within policy endorsements. Given that AI is still relatively new, the precise language for such exclusions is still evolving. Meanwhile, underwriters may consider segregating AI-related risks and charging separately for them to enhance clarity. Notably, a few insurers have designed specific insurance products covering AI risks, but most carriers appear to be in an observation phase at this stage.


Conclusion

The AI landscape is evolving rapidly. At this stage, insurers can actively participate in developing industry standards for AI risk management and collaborate with regulators to ensure alignment with evolving guidelines.

Insurers should regularly review and adapt policies to address emerging risks. Many risk carriers continue to invest time and effort in better understanding their policyholders’ use of AI and in competitively pricing and underwriting AI risks. In that context, engaging with clients and learning from claims experience will, as ever, help to refine policy offerings, as well as inform where risk mitigation services might be offered to policyholders. This includes guidance on AI governance, model validation, and ethical considerations.

