
22 May 2024 | 4 minute read

Explainability of AI – benefits, risks and accountability

Part one

As far back as 2018, the House of Lords recognised that “the development of intelligible AI systems is a fundamental necessity if AI is to become an integral and trusted tool in our society” (House of Lords (2018) AI in the UK: ready, willing and able? Report of Session 2017–19. HL Paper 100), and the EU has now taken this a step further via the EU AI Act, which states that “High-risk AI systems shall be designed and developed in such a way as to ensure that their operation is sufficiently transparent to enable deployers to interpret a system's output and use it appropriately” (art. 13).

The Recitals to the AI Act also recognise that “the exercise of important procedural fundamental rights, such as the right to an effective remedy and to a fair trial as well as the right of defence and the presumption of innocence, could be hampered, in particular, where such AI systems are not sufficiently transparent, explainable and documented”.

Explainability, which can broadly be defined as the capacity to express why an AI system reached a particular decision, involves a multi-factorial analysis that goes beyond how the model works: it also looks at why the model was created, who created it and what its data sources are.

In certain contexts, such as high-risk AI systems, explainability is a base requirement for complying with regulatory obligations. Beyond compliance, however, there can be commercial benefits to explainable AI. For instance, McKinsey’s “The State of AI in 2021” found that “Companies seeing the biggest bottom-line returns from AI - those that attribute at least 20 percent of EBIT to their use of AI - are more likely than others to follow best practices that enable explainability”.

There may be good reason for this impact on the bottom line. If you know how your system works, and how it uses data, it is easier to assess where things could be improved, or where things are going wrong. This will ultimately result in a better product being brought to market.

It also safeguards against potential bias by providing an understanding of where bias may occur, so that steps can be taken to rectify the model. Allegations of bias are commercial kryptonite and can influence the whole perception of an organisation. Further, by being in a position to explain the model, you may be able to show that fairness is in fact “baked in” and thus illustrate good corporate citizenship.

Finally, explainability engenders trust with customers. Without the ability to explain the system, you are asking users for blind trust. Even in the strongest relationships, this is a big ask.

There are, however, points to consider when striving for explainability.

There are nefarious actors who may, if a system is well known and understood, try to game it. This is not in itself a reason to depart from explainable AI, but it does indicate that guardrails may need to be put in place.

In addition, some AI systems are simply too complex to understand, even for experts. In a litigation context, we are used to dealing with complex IT-related disputes where it can be difficult to pinpoint the causes of issues. AI can make this even more difficult.

Explainability may also give users a false sense of security about the risks associated with AI, leading to overconfidence in, or overreliance on, the technology.

Crucially, explainability alone cannot answer questions of accountability, nor should it be used as a sticking plaster to avoid them. Human scrutiny is still needed.

On a more philosophical level, not all human decisions are easily explainable, and there is a school of thought that it is illogical to demand greater explainability from, for instance, LLMs that mimic human behaviour than we expect of human decision-making. These philosophical considerations, and the debates stemming from them, are not merely academic: they are likely to have a significant impact on policy making in the years ahead.

In the second part of this mini-series, we will consider the explainability of AI from a disputes perspective, focusing on issues of bias, ethics, transparency and causation.
