undefined
|en

Add a bookmark to get started

Global Site
Africa
MoroccoEnglish
South AfricaEnglish
Asia Pacific
AustraliaEnglish
Hong Kong SAR ChinaEnglish简体中文
KoreaEnglish
New ZealandEnglish
SingaporeEnglish
ThailandEnglish
Europe
BelgiumEnglish
Czech RepublicEnglish
HungaryEnglish
IrelandEnglish
LuxembourgEnglish
NetherlandsEnglish
PolandEnglish
PortugalEnglish
RomaniaEnglish
Slovak RepublicEnglish
United KingdomEnglish
Middle East
BahrainEnglish
QatarEnglish
North America
Puerto RicoEnglish
United StatesEnglish
OtherForMigration
5 June 20241 minute read

Legal red teaming: A systematic approach to assessing legal risk of generative AI models

Generative artificial intelligence (GenAI) is gaining substantial traction across various domains. Unlike traditional, “narrow-purpose” AI, which is deterministic in nature once trained, GenAI is nondeterministic and excels in creating, or generating, new content, such as text, images, and video.

The non-deterministic nature of GenAI means that it can create materially different outputs each time, even with the same input. This non-deterministic nature, however, means that traditional model testing and validation methods are not optimal or effective. Instead, organizations must rely on other options, such as red teaming, to fill the assessment gap.

Red teaming, which includes proactively attacking a system to identify vulnerabilities, is now a common practice in cybersecurity – and legal red teaming is often a valuable strategy in assessing and mitigating risks associated with GenAI technologies from a non-technical standpoint.

In our latest white paper, we explore how legal red teaming can serve as a valuable methodology for creating safer and more responsible GenAI systems.