
June 5, 2024 · 1 minute read

Legal red teaming: A systematic approach to assessing legal risk of generative AI models

Generative artificial intelligence (GenAI) is gaining substantial traction across various domains. Unlike traditional, "narrow-purpose" AI, which is deterministic in nature once trained, GenAI is non-deterministic and excels at creating, or generating, new content such as text, images, and video.

Because GenAI is non-deterministic, it can produce materially different outputs each time, even when given the same input. As a result, traditional model testing and validation methods are neither optimal nor effective, and organizations must rely on other approaches, such as red teaming, to fill the assessment gap.

Red teaming, which involves proactively attacking a system to identify vulnerabilities, is now a common practice in cybersecurity. Legal red teaming applies the same idea from a non-technical standpoint and is often a valuable strategy for assessing and mitigating the risks associated with GenAI technologies.

In our latest white paper, we explore how legal red teaming can serve as a valuable methodology for creating safer and more responsible GenAI systems.
