30 July 2024 | 5 minute read

NIST releases its Generative Artificial Intelligence Profile: Key points

On July 26, 2024, the National Institute of Standards and Technology (NIST) released the Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile (GenAI Profile) pursuant to President Joe Biden’s Executive Order on Safe, Secure, and Trustworthy Artificial Intelligence.

The GenAI Profile is designed as a companion resource to NIST’s AI Risk Management Framework (AI RMF), released in January 2023, and serves as a technology-specific implementation of the AI RMF for generative artificial intelligence (GenAI).

As GenAI can be used across various contexts, the profile is sector agnostic and is designed to help organizations integrate trustworthiness considerations into the design, development, use, and evaluation of GenAI systems.

Distinct from previous guidance produced by NIST, the profile outlines the risks that are unique to GenAI, suggests corresponding actions to manage these risks, and summarizes operational considerations for effective risk management.

The GenAI Profile identifies and discusses 12 primary risks that are unique to, or exacerbated by, GenAI:

  • Chemical, biological, radiological, or nuclear (CBRN) information or capabilities. GenAI could facilitate access to, or synthesis of, information related to CBRN weapons.

  • Confabulation. The production of false or misleading content by GenAI may lead users to believe incorrect information.

  • Dangerous, violent, or hateful content. The creation of inciting, radicalizing, or threatening content by GenAI may promote violence or illegal activities.

  • Data privacy. Training and use of GenAI systems may lead to the leakage, unauthorized use, or de-anonymization of personal data.

  • Environmental impacts. Training and operating GenAI systems may lead to high energy consumption and carbon emissions.

  • Harmful bias or homogenization. Societal biases and disparities may be perpetuated or amplified through the use of GenAI systems, leading to further discrimination and unfair treatment.

  • Human-AI configuration. Inappropriate use of, or interactions between, humans and GenAI systems may lead to human-centric risks, including overreliance and automation bias.

  • Information integrity. GenAI could generate and disseminate false or misleading information at scale, potentially eroding public trust.

  • Information security. Cyberattacks, such as data poisoning and prompt injection, may compromise GenAI systems and their outputs.

  • Intellectual property. Use of protected materials in GenAI training and inputs may lead to infringement of copyrights and other intellectual property rights.

  • Obscene, degrading, and/or abusive content. GenAI may generate illegal, abusive, or degrading imagery, including synthetic sexual abuse material and nonconsensual intimate images, unless proper guardrails are put in place.

  • Value chain and component integration. Integration of nontransparent or third-party components and data may lead to diminished accountability and the potential for errors across the AI value chain.

In response to the outlined risks, the GenAI Profile suggests a number of voluntary actions that organizations may adopt, subject to internal considerations, to operationalize mitigations and reduce the potential for harm. These include establishing protocols for red teaming GenAI systems, implementing incident response teams that react to emergent harms, such as failures to meet minimum bias and accuracy thresholds, and integrating GenAI lifecycle considerations into wider AI governance frameworks.

These actions closely align with the four core functions outlined in the wider NIST AI RMF (ie, “Govern, Map, Measure, and Manage”), which are also generally regarded as leading industry best practices.

Because these suggested actions aim to help manage organizational risk associated with GenAI across all sectors and industries, they are wide-ranging and may require experience in organizational governance, AI system design, and AI system testing, among other areas.

Looking forward, the US Department of Commerce and NIST are expected to continue to release guidance and publications intended to help improve the safety, security, and trustworthiness of artificial intelligence systems.

Beyond the GenAI Profile, NIST plans to release multiple other publications this year, including approaches to reducing risks posed by synthetic content created or altered by AI and guidelines for managing the risk of misuse for dual-use foundation models.

DLA Piper is here to help

DLA Piper is prepared to assist organizations in navigating emerging industry standards that require deep cross-functional experience, such as the NIST GenAI Profile.

DLA Piper’s AI and Data Analytics practice is a cross-functional team of attorneys, data scientists, statisticians, and software developers. Our technical team, including combined lawyer-data scientists, is internal to DLA Piper and works under the direction and supervision of counsel, maximizing privilege while AI systems are investigated and remediated.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

For further information or if you have any questions, please contact any of the authors.
