Add a bookmark to get started

Abstract pink lights
11 October 202412 minute read

Minimizing AI risk

Top points for compliance officers

More than ever, artificial intelligence (AI) is being implemented as a powerful tool to improve our lives and businesses. But with its benefits comes a host of risks – and regulators are homing in on its use as a tool for illegal activities.

Federal agencies have warned compliance officers that they must be aware of the risks associated with AI and develop effective compliance programs to mitigate them.

In this issue of Practical Compliance, we set out key steps compliance officers can take to identify and avoid business risks presented by the use of AI technology.

Identifying AI risks

Identifying AI risks requires companies to understand both the regulatory landscape in which they are operating and how their employees are leveraging AI across the business.

Some potential issues arising from companies’ use of AI include:

  • Facilitation of corporate crime resulting in higher penalties. Although AI can be used to improve business operations and efficiency and enhance compliance-related activities, such technology can also be misused to assist with corporate crimes such as price fixing, money laundering, fraud, bribery, or market manipulation. DOJ has signaled that it will seek higher penalties for instances in which AI was deliberately misused to make crimes “significantly more serious.” In a speech made at the University of Oxford in February 2024, US Deputy Attorney General Lisa Monaco noted, “The US criminal justice system has long applied increased penalties to crimes committed with a firearm . . . . Like a firearm, AI can also enhance the danger of a crime.”[1]  She also announced that DOJ prosecutors will now be evaluating, as part of DOJ’s guidance on Evaluation of Corporate Compliance Programs, whether companies have effectively assessed and managed their AI-related risks. 

  • Cybersecurity and data theft risks. While AI can significantly improve cybersecurity tools to detect attacks, identify and flag suspicious emails, and analyze massive amounts of data rapidly, it can also be seen as a double-edged sword. Bad actors can leverage AI for various purposes including data breaches, malware, ransomware, data loss, and theft, to name a few.  In fact, employees could also innocently run afoul of data privacy laws and protections of proprietary and confidential information by feeding company data from sensitive documents to AI tools, which do not guarantee privacy.  Generative AI intakes and produces large amounts of data in the form of text, videos, images, and code.  Companies operating in regulated industries such as healthcare or finance face heightened risks as they must safeguard protected data, such as personally identifying information that is subject to heightened regulatory protections.

    Theft of Americans’ sensitive personal data has been a significant concern of top US officials.  As discussed in our client alert, President Joe Biden issued an Executive Order in February 2024 seeking to restrict the sale of American data to China, Russia, Iran, North Korea, Venezuela, and Cuba to prevent those countries from accessing personally identifiable information for purposes of blackmail, surveillance, and the misuse of AI to target Americans. DOJ quickly followed with an announcement signaling their focus on the safety of Americans’ personal data in the face of powerful AI tools.

    Additionally, there is fierce international competition in the race to dominate AI technology, which has major commercial and security implications. DOJ leaders have announced heightened concerns about foreign adversaries harnessing AI technologies, noting that companies must safeguard their own trade secrets and proprietary information. For example, in March 2024, DOJ charged a former Google software engineer with four counts of theft of trade secrets after he was given access to confidential information about the company’s supercomputing data centers and allegedly began uploading hundreds of files into a personal account while simultaneously working for two Chinese tech companies.[2]

  • Ensuring compliance with a complex regulatory framework. The complex regulatory frameworks applicable to AI are rapidly evolving, which presents compliance risks as companies will need to stay abreast of and navigate the various requirements of numerous regulations. For instance, on July 12, 2024, the European Union (EU) published Regulation EU 2024/1689 (AI Act), which became the world’s first comprehensive AI regulation. This regulation has an international impact, not only for companies that are subject to its regulations, but also for other jurisdictions that are using the regulation as a model to develop their own. The Act imposes various obligations depending on the risk category assigned to the AI system, and failure to comply may result in heavy penalties, including substantial fines.

    Additionally, on July 31, 2024, the Senate Committee on Commerce, Science, and Transportation passed a slate of ten legislative measures on key AI-related issues. In the US alone, various enforcement agencies – the Federal Trade Commission, Consumer Financial Protection Bureau, DOJ, Federal Communications Commission, and the Securities and Exchange Commission have all made statements about restrictions over AI using existing legal authorities. Several states have also already enacted comprehensive privacy legislation that can regulate AI, such as the California Privacy Protection Act and the Biometric Information Privacy Act in Illinois. Further, just last week California’s Governor Gavin Newsom signed a raft of AI regulations ranging from safety measures, consumer transparency measures, reporting requirements, clarification of privacy safeguards, protections for performers and deceased celebrities, and election integrity measures. Failure to abide by state-specific privacy laws can expose companies to risks such as penalties, fines, litigation, and state attorney general enforcement actions. As companies begin navigating these complex regulatory schemes, they will need to stay abreast of all applicable regulations to avoid the associated risks with violating such laws.

  • Spread of disinformation. There are numerous ways AI can be used to spread disinformation that can cause serious risks to companies, including the ability to generate false information and fake content in vast quantities, and the overall inability to determine whether content originates from a human or a machine. At a basic level, bad actors can easily use AI to create content containing disinformation and spread falsehoods about a company across the internet at record speeds that could mislead stakeholders and the public, which can negatively impact the company’s financial performance.  The content created by AI is difficult to fact-check, can be incorrect, and can suffer from inherent bias from the underlying data.  Generative AI can even hallucinate and produce content that is not based on existing data. Companies should carefully test and evaluate their AI content and can use methods such as red teaming, a common practice in cybersecurity to proactively attack a system to identify vulnerabilities.  This approach tests the strength and resilience of a system by simulating threats discussed at length in our white paper.

  • AI washing.  The SEC and DOJ have issued warnings to public companies against “AI washing,” or making unfounded claims regarding their AI capabilities.  The public has been advised of an increase in investment frauds involving the purported use of AI and warned against investing in schemes that claim reliance on AI-generated algorithms that promise high returns.  Companies should pay close attention to the claims they make in their public filings about their AI capabilities.

  • Identifying AI risks by industry. Of course, companies must also navigate AI risks associated with their industries. For example, the healthcare and finance industries are subject to strict regulatory guidelines due to the sensitivity of the data being stored and how the AI models can be used. Understanding how AI is being used in a particular industry is not only important from a compliance perspective, but it also allows companies to stay up to date on what competitors are doing in this area. Companies are looking to integrate AI into all aspects of business, including customer service, marketing, IT, and legal. Monitoring industry use of AI can allow companies to quickly identify what use fits their organization and how best to implement it.

Steps to mitigate AI risks

Companies may be overconfident in their internal assessments regarding their exposure to AI risks. As AI continues to develop and its use becomes more widespread, it is important to not only be aware of the associated risks, but to establish practices and procedures to mitigate them.  As noted, this is now an expectation by DOJ of corporate compliance programs.

There are several steps companies can take to protect themselves from the risks posed by AI technology. The first is to understand all use cases of AI technology within the organization. This includes knowing where the technology operates and what data is used to train the models. This understanding will then act as a baseline for creating an AI compliance framework tailored to the specific needs within the organization and its industry.  

A refined compliance framework will contemplate the organization’s exposure to AI risks and maintains a system to proactively monitor and test the technology for accuracy, bias, and hallucinations. It will identify and alert when AI-related misconduct occurs using an internal mechanism that detects a breach or abuse. However, companies are also encouraged to include regular intervals of human review and oversight instead of relying entirely on automation – for example, through required systems checks and whistleblower hotlines. Employee training programs can also help the larger company understand the risks of AI misuse, specifically in the context of employees’ roles. Notably, organizations are encouraged to ensure their board is comprised of at least one individual who has AI expertise and understands the key risks associated with the technology in the relevant industry. Many companies have accomplished this by appointing a Chief AI Officer (CAIO) to monitor the company’s use of AI technology and maintain its AI compliance framework. Alternatively, an AI expert can be made available for the board’s consultation. Adopting this framework and tailoring it to an organization’s needs can mitigate a company’s potential exposure to AI risks. 

In addition to maintaining effective corporate governance, a robust compliance framework can also serve as a defense if AI misconduct materializes, requiring the company to answer to the government. As recently as September 23, 2024, DOJ announced its revisions to the Evaluation of Corporate Compliance Programs (ECCP),[3] which are intended to guide DOJ prosecutors in assessing a company’s ability to maintain a compliance framework that can identify and mitigate the risks associated with the misuse of AI. The revisions provide a number of expectations for companies utilizing AI and other emerging technologies, including to train employees on the use of AI and maintain controls to ensure the trustworthiness and reliability of the technology. Companies responding to an AI-related breach or abuse should prepare to present their compliance framework to the government in light of the harm that occurred. An established framework that incorporates the steps set out above will demonstrate competence and build trust with the government, potentially reducing liability. The government also urges companies to promptly report any misconduct, rather than sit on evidence of wrongdoing.  

While companies are increasingly adopting the technological innovation that is AI, such influential technology demands consideration of its abilities and limitations. Understanding the risks associated with AI and developing a plan to address them can allow companies to fully embrace the benefits of the technology, without being blindsided by its potential to cause harm.  

Key takeaways 

  1. Consider conducting a risk assessment on a regular basis to identify your company’s use of AI technology and determine vulnerabilities and areas of AI-related risk. 

  2. Consider designing and implementing a companywide AI policy to govern use.  Since DOJ will be monitoring whether companies have thoughtfully considered AI risks and implemented policies and controls to mitigate risks, it will be necessary to have clearly articulated policies and procedures in place and well-documented training as appropriate.  

  3. Consider AI’s potential effects on current controls that safeguard data and regularly monitor, test, and bolster those controls to mitigate AI-related risks on a regular basis.

  4. Regularly review the public statements that are made about your company’s use of AI, including in SEC disclosures, to ensure they are up to date, accurate, and not misleading.

  5. Consider placing an AI technology expert on the board or making such an expert available for consultation. 

  6. Monitor AI-related enforcement actions as these are good indicators of DOJ’s expectations of companies’ compliance efforts.

  7. Monitor the new and pending legislation in the jurisdictions where your company does business to ensure compliance with potentially new and various frameworks.  

To further understand the specific compliance risks associated with using AI in your business, please contact any of the authors or your DLA Piper relationship attorney. 

 
Print