
7 June 2024 | 4 minute read

Explainability of AI – benefits, risks and accountability

Part two

From a litigator’s perspective, issues around explainability raise fascinating questions about bias and ethics, about what form of transparency is needed to make AI truly explainable, and about legal causation.

In respect of bias and ethics, we are already seeing claims around the globe where explainability, or a lack thereof, has influenced a court's findings. For instance, in the USA, claims have been made in respect of AI fraud-audit tools deployed by government bodies to categorise recipients of government benefits as having engaged in fraud, alleging that the outputs were biased and that the decisions reached could not be explained.

Closer to home, claims have arisen regarding the use of automated facial recognition technology in a pilot project by the South Wales Police. The Court of Appeal held that such use was not "in accordance with the law" for the purposes of Article 8 of the ECHR, which enshrines the right to respect for private and family life. Among other things, it was considered that reasonable steps had not been taken to investigate whether the technology had a racial or gender bias, as required by the public sector equality duty that applies in the UK.

In meeting regulatory transparency or explainability requirements, it is important to understand that one size does not fit all. The Royal Society’s "Explainable AI: the basics - Policy Briefing" recommends considering, for instance, what form of explanation is most useful to those affected by the outcome of the system, whether the explanation provided is accessible to the community for which it is intended, and what processes of stakeholder engagement are in place to negotiate these questions.

AI and the question of explainability also have an interesting impact on potential liabilities and questions of causation. It is a central, and trite, concept of English law that, generally speaking, in order to recover losses for breach of duty and/or contract, the breach of the contract and/or duty needs to have caused those losses. The burden of establishing causation is on the Claimant. This makes things very difficult for a Claimant who alleges it has been wronged by the use of AI. To counter this, the proposed EU AI Liability Directive suggested that (in a consumer context) courts should apply a presumption of causality. This in effect reverses the burden of proof and puts the onus on the Defendant to show that the harmful action or output of the AI was not caused by its fault. It therefore emphasises the need for organisations to document how they are using AI technologies, including the steps that have been taken to protect individuals from harm.

The UK government’s White Paper on AI regulation, however, recognises “the need to consider which actors should be responsible and liable for complying with the principles”, but goes on to say that it is “too soon to make decisions about liability as it is a complex, rapidly evolving issue”.

It remains to be seen how courts worldwide will approach issues of liability for harm arising from the use of AI models. While the default position is that the Claimant needs to establish causation, whether the proposed EU approach will influence legislators and judges in other jurisdictions is an open question. For now at least, each case will remain very much fact-dependent, and we can foresee that courts will have to rely heavily on experts to unpick the AI models. This is not necessarily different to other complex IT-related disputes, so for now it is to a certain extent business as usual, but we will be watching developments closely.
