
July 11, 2023 · 3 minute read

Canadian courts respond to generative AI by issuing new practice directions on AI use in court submissions

The use of generative artificial intelligence (AI) tools in litigation has grabbed the attention of the courts. Recently, the Court of King's Bench of Manitoba and the Supreme Court of Yukon issued practice directions regarding the use of AI in court submissions. Both courts are concerned with managing the risks posed by AI tools and have signalled that they expect counsel to be transparent about their use.

Specifically, the Court of King's Bench of Manitoba states that "when artificial intelligence has been used in the preparation of materials filed with the court, the materials must indicate how artificial intelligence was used." The Supreme Court of Yukon requires counsel (and self-represented parties) who rely on "artificial intelligence (such as ChatGPT or any other artificial intelligence platform) for their legal research or submissions in any matter and in any form before the Court" to "advise the Court of the tool used and for what purpose".

Both directions note concerns about the reliability and accuracy of information generated by AI. The Supreme Court of Yukon makes express reference to ChatGPT in the context of legal research, obliquely referring to the recent incident in the United States District Court for the Southern District of New York in which a lawyer relied on non-existent legal precedents hallucinated by ChatGPT.

In that widely reported incident, ChatGPT, a generative AI platform, was misused for legal research. User misunderstandings about the appropriate applications of the technology, combined with the opacity of AI decision-making (so-called "black-box" decisions), contribute to the potential for such accidental misuse.

The wording of these directions is broad and open-ended, intended to capture both expected and unforeseeable uses of this rapidly evolving technology. They are an attempt to ensure that any AI use that could affect the integrity of proceedings is brought to the attention of the court.

However, the practice directions do not define "artificial intelligence," nor do they distinguish between output generated entirely by a large language model and output generated by a lawyer using a tool to assist with document discovery, production, or legal research. Lawyers already use numerous AI tools in preparing cases and making submissions without disclosing their use, and many are unaware that AI is embedded in software they use regularly. Moreover, many AI tools are not generative and are not akin to large language models such as ChatGPT. The directions do not make these distinctions clear, nor do they indicate how the courts intend to respond to or address the disclosures once they are made.

Given these issues, these practice directions are best read as "directional": they signal an unease with generative AI and high-profile large language models prompted by recent news stories. It will take time for the courts to fully grapple with the implications of increasingly sophisticated uses of AI in the law and to develop meaningful protocols and safeguards based on a principled approach.

It is anticipated that more Canadian courts will follow suit and issue directions on the use of AI in proceedings.
