

October 31, 2024 | 7 minute read

White House issues "first-ever" National Security Memorandum on AI

New directives have implications across a range of federal agencies

On October 24, 2024, the White House released the “first-ever” National Security Memorandum (NSM) on artificial intelligence (AI). The NSM directs the Pentagon and other US national security agencies to increase their adoption of AI technologies in a safe and responsible, but expedited, manner.

As is often the case during discussions of risks and opportunities related to AI, the NSM cites concerns about adversaries leveraging AI and the importance of maintaining America’s competitive advantage in key technologies.

Strengthening the US position in a global technology race

The NSM builds on recent AI initiatives by the White House, including President Joe Biden’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (Executive Order), signed in 2023. Among other mandates, the Executive Order called for the government to develop and implement strategies aimed at fostering innovation while mitigating the potential downsides of AI.

On the morning of the NSM’s release, White House National Security Advisor Jake Sullivan outlined the Memorandum’s three main lines of effort:

  • Securing American leadership in AI
  • Harnessing AI for national security, and
  • Strengthening international AI partnerships.

A classified annex to the NSM addresses additional sensitive national security issues in greater detail, including countering adversary use of AI that poses risks to US national security.

A holistic governmental approach to national security

Reflecting the whole-of-government approach envisioned by the Executive Order, the NSM directly addresses several key federal officials, including the:

  • Secretary of State
  • Secretary of the Treasury
  • Secretary of Defense
  • Secretary of Commerce
  • Secretary of Energy
  • Secretary of Health and Human Services
  • Secretary of Homeland Security

The NSM is also directed toward the Vice President, the Attorney General, the Director of National Intelligence, the Director of the Central Intelligence Agency, and several other top-level Cabinet and White House officials.

Material impacts on economic policy

White House National Economic Advisor Lael Brainard noted that the NSM has significant economic policy implications.

The NSM directs the National Economic Council to coordinate an economic assessment of the relative competitive advantage of the US private sector AI ecosystem. Brainard cited the need for expanded AI data centers and the corresponding necessity to accelerate construction of electrical generating and transmission infrastructure with an emphasis on non-fossil-fuel-based energy sources.

Training and recruiting a workforce to accomplish these goals is also expected to require significant public investment as well as partnerships with the private sector.

US to leverage key research institutes

The NSM formally designates the AI Safety Institute as “US industry’s primary port of contact” in the government, according to a White House Fact Sheet. The NSM goes on to outline procedures for the Safety Institute to partner with national security agencies, including the intelligence community and the Departments of Defense and Energy. The Institute is housed within the National Institute of Standards and Technology (NIST), an agency of the Commerce Department.

The NSM also doubles down on the National AI Research Resource (NAIRR), currently in its pilot stage, to ensure that researchers from universities, civil society, and small businesses can conduct technically meaningful AI research. NAIRR is under the National Science Foundation (NSF), in partnership with 12 other federal agencies and 26 non-governmental partners.

To further the goal of ensuring US leadership, the NSM lists as top priorities research to improve the security and diversity of advanced chip supply chains, as well as the development of next-generation government supercomputers and other emerging technologies relevant to AI.

Realignment of national security and intelligence priorities

The NSM makes collection on foreign competitors’ operations against the US AI sector a top-tier intelligence priority. Key government entities will therefore be tasked with providing AI developers with cybersecurity and counterintelligence information.

In addition to promoting the effective use of AI systems in service to the national security mission, the NSM stresses the need to align these intelligence efforts with democratic values. It provides guidance on responsible AI governance and risk management for national security missions, complementing the previous guidance issued by the Office of Management and Budget for non-national security missions. Agencies are expected to monitor, assess, and mitigate AI risks related to invasions of privacy, bias and discrimination, the safety of individuals and groups, and other human rights concerns.

Offensive cyber threats

The NSM empowers agencies to urgently address potential new cyberthreats emanating from advancements in AI. It requires the AI Safety Institute to pursue voluntary testing of at least two frontier AI models prior to their release to evaluate capabilities that might pose a threat to national security, including capabilities to aid offensive cyber operations.

The NSM also directs the NSA’s AI Security Center (AISC) to develop (within 120 days) the capability to perform rapid systemic classified testing of AI models’ capacity to detect, generate, and/or exacerbate offensive cyber threats.

The NSM goes on to require interagency publication of guidance concerning known AI cybersecurity vulnerabilities and threats as well as best practices for addressing them.

Cooperation with allies

Under the NSM, the US intends to collaborate with allies and international partners to establish a stable, responsible, and rights-respecting governance framework, aiming to ensure that AI technology is developed and used in ways that adhere to international law while protecting human rights and fundamental freedoms.

Building a safe, secure, and trustworthy AI environment

While national security is the main focus of the NSM, it also aims to foster and build “the safety, security, and trustworthiness of artificial intelligence.”

The portion of the NSM addressing risk-based use cases extends beyond government and carries potential implications for the industry, comparable in intent and effect to the European Union’s AI Act, which entered into force in August 2024.

The newly mandated “Framework to Advance AI Governance and Risk Management in National Security” (the Framework), for example, includes, among other items to be specified within the Framework, guidance on AI activities that are “high-impact” and require minimum risk management practices, including high-impact AI uses that affect federal government personnel.

The NSM outlines these uses as follows:

“Such high-impact activities shall include AI whose output serves as a principal basis for a decision or action that could exacerbate or create significant risks to national security, international norms, human rights, civil rights, civil liberties, privacy, safety, or other democratic values.”

Each agency is directed to issue guidance on high-impact use cases and “shall ensure privacy, civil liberties, and safety officials are integrated into AI governance and oversight structures.”

A view into government thinking

While the NSM is not enforceable through law, the guidance is yet another indicator of how the White House is thinking about and approaching AI. As many of these government arms are expected to be involved in any future federal regulation of AI, the guidance is an important resource for understanding how regulators may approach the use of AI by private-sector organizations.

DLA Piper is here to help

DLA Piper’s team of lawyers, data scientists, and policy advisors assist organizations in navigating the complex workings of their AI systems to facilitate compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impacts on industry across the world.

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice.

For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI ChatRoom series.

For further information or if you have any questions, please contact any of the authors.
