10 December 2024 | 7 minute read

CFTC issues advisory on use of AI in regulated markets

The US Commodity Futures Trading Commission (CFTC) issued a nonbinding staff advisory, CFTC Letter No. 24-17, to all CFTC-registered entities and registrants concerning the use of artificial intelligence (AI) in CFTC-regulated markets.

The advisory, released on December 5, 2024, complements the CFTC’s focus on building a forward-looking AI culture. The advisory highlights the staff’s awareness of the “potential risks and benefits” associated with the use of AI in derivatives trading and reflects its understanding of “current and potential AI use cases.”

The advisory reminds entities of their compliance obligations under the Commodity Exchange Act (CEA) and CFTC regulations. It also emphasizes AI governance, risk assessment, system safeguards, and regulatory compliance in AI adoption, for both internally developed and third-party AI tools.

Key components of the advisory and our takeaways are summarized below.

AI and the CEA and CFTC regulations

The advisory breaks down certain requirements under the CEA and CFTC regulations that are likely to be implicated by AI use. While not exhaustive, this list identifies existing statutory and regulatory requirements that CFTC-regulated entities’ use of AI may trigger. The staff cautions CFTC-registered entities and registrants to continue complying with all applicable CEA and regulatory requirements, even as they implement AI into their business practices. Anticipated AI use cases identified in the advisory are categorized by the type of registered entity (ie, Designated Contract Markets, Swap Execution Facilities, Swap Data Repositories, and Derivatives Clearing Organizations) and include:

  • System safeguards: Detection and response to cyber intrusions, identifying cyber vulnerabilities, and hardening defenses

  • Member assessment and interaction: Review of clearing members’ compliance with certain CFTC rules and communications with members

  • Settlement: Supporting derivatives clearing organization settlement processes, as well as validating data, mining for abnormalities, or identifying failed trades

  • Risk assessment and management: Calculation and collection of initial and variation margin for uncleared swaps

  • Compliance and recordkeeping

  • Customer protection

The advisory highlights the importance of thorough risk management processes, particularly in AI systems used for trade execution, market surveillance, and other market functions. This applies whether the AI solutions are developed internally or procured from third parties. The advisory also stresses the value of robust system safeguards to mitigate vulnerabilities that may lead to cybersecurity risks, algorithmic errors, and market disruptions caused by automated decision making.

Moreover, the advisory underscores the responsibility of regulated entities to maintain transparency, fairness, and accountability in their AI systems. This includes documentation, testing, and regular audits to ensure compliance with legal and ethical standards.

The CFTC has framed the advisory as a first step toward potential future policies or regulations, and the staff acknowledges that it will continue to reevaluate its positions and may issue further guidance, recommendations, or regulations.

Key takeaways

While AI has the potential to transform the financial services industry, regulated entities cannot forget the basics.

As the advisory indicates, it is the responsibility of CFTC-registered entities and registrants to continue to operate consistently with their Core Principles, regardless of how AI is incorporated into their businesses. This includes documentation, testing, audits, and compliance with the CEA and CFTC regulations.

While regulated entities have been engaging in these activities for years, the introduction of AI comes with its own unique risks that must be accounted for:

  • Specific AI governance: Regulated entities are well versed in traditional governance. Unlike traditional governance, which typically focuses on human oversight and established processes, AI governance must be nimble enough to account for rapid changes in technology and must also address the complexities of automated decision making, including data integrity and algorithmic bias. This requires a more proactive and dynamic approach than classic governance programs, often with multiple layers that both strategically guide the organization’s use of AI and mitigate the day-to-day risks associated with it. A prudent AI governance program will also balance an entity’s compliance with the CEA and CFTC regulations against emerging AI-specific regulations.

  • Testing: Use of AI can introduce or perpetuate bias that increases legal, regulatory, and reputational risk. To mitigate these risks, organizations must rely on testing and validation procedures, which are far from straightforward. From a technical perspective, developing accurate bias testing methodologies that are fit for purpose is a complex and nuanced endeavor, and it is critically important to validate that the methods and techniques used to detect bias are not themselves introducing or perpetuating bias.

  • Red teaming: Red teaming is rapidly becoming the standard approach to ensuring that technologies based on generative AI and large language models are trustworthy, safe, unbiased, and legally compliant. The primary goal is to identify and mitigate potential failures or exploitations of AI systems so that developers can implement more robust solutions. To surface potential risks, the “red team” plays the role of an adversary, attempting to exploit, deceive, or otherwise outmaneuver the AI system. Red teaming should account for the specific harms likely to arise from the use case, eg, violations of the CEA and CFTC regulations.

Unlike some other compliance program guidance – the US Department of Justice’s, for example – the advisory provides specific guidance on the requirements applicable to the underlying systems. This is because CFTC regulations govern those systems and the underlying market data, allowing the advisory to be more detailed about potential use cases and their corresponding risks.

The advisory acknowledges the rapidly evolving nature of AI and the challenges it poses for adapting regulatory frameworks. The CFTC continues to evaluate the need for future regulation, guidance, or other action, and the advisory signals the agency’s commitment to fostering innovation while safeguarding market stability.

The underlying takeaway is that AI tools are not a substitute for basic compliance tools. Regulated entities should expect AI to be a topic of discussion during CFTC examinations and routine oversight activities, and should anticipate that the CFTC will itself use AI to further its enforcement mission.

DLA Piper is here to help

DLA Piper’s multidisciplinary team of lawyers and data scientists assists organizations in navigating the complex workings of their AI systems to ensure compliance with current and developing regulatory requirements.

We continuously monitor updates and developments in AI and its impact on industries across the world. Our lawyers also advise companies on their compliance obligations under the CEA and other relevant laws. DLA Piper is positioned to help companies ensure that their adoption and use of AI maintain compliance with existing industry regulations.

As part of the Financial Times’s 2023 North America Innovative Lawyer awards, DLA Piper received the Innovative Lawyers in Technology award for its AI and Data Analytics practice. Our White Collar team has been named to the Global Investigations Review GIR 30, a list of law firms noted for complex multijurisdictional investigations.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI strategy through our newly released AI ChatRoom series.

For further information, please contact any of the authors.
