12 July 2024 · 16 minute read

EU publishes its AI Act: Key steps for organizations

On July 12, 2024, the European Union (EU) published Regulation (EU) 2024/1689 (AI Act), the world’s first comprehensive artificial intelligence (AI) regulation. The publication comes more than three years after the proposed original text was released by the EU Commission in April 2021.

The AI Act serves as a comprehensive, sector-agnostic, regulatory regime intended to form the foundation of AI governance and regulation across the EU, with downstream implications for companies and developing legislation around the world.

Though the AI Act is primarily focused on governing organizations and individuals within the borders of the EU, the Act has broad international reach. Organizations far outside Europe, such as those operating within the US, may be subject to the Act’s requirements even where they have no presence within an EU Member State.

Publication is the final step in the years-long development of the Act, which will enter into force on August 1, 2024. Organizations are encouraged to take steps to understand how they may be affected by this key European regulation, their obligations under the Act, and when they must comply with the Act’s requirements.

This update provides a high-level overview of the key elements of the AI Act which organizations are urged to understand to ready themselves for compliance with the text, and it sets out key dates for when applicable obligations will take effect.

The AI Act: A human impact-focused regulation

The AI Act approaches the regulation of AI with a primary focus on preventing harm to the health, safety, and fundamental rights of individuals within the EU.[1]

To accomplish this goal, the Act creates a risk-based framework that establishes obligations on individuals and organizations that are dependent on their role in the value chain of an AI System (as defined below), the risk of the technology involved, and/or the risk that arises from the context of its use (eg, in an employment setting).

Defining the undefinable: What is an AI System?

Under the Act, many technologies that are often associated with the concept of AI do not fall within the defined category, and they are therefore not subject to the scope of the regulation. Organizations are encouraged to have an understanding of the AI Act’s definition of an “AI System.”

The definition of an “AI System” has seen many iterations, and it has developed significantly since the text was originally introduced. The most significant changes followed the rapid increase in popularity of AI Systems, and the exponential improvement in the capability of generative AI, large language models (LLMs), and other advanced frontier technologies.

In the published text, the definition of AI System is settled as:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

Much like the definitions found in many emerging regulations around the world, including those under development in the US, the final definition closely follows and expands on key elements of the AI System definition developed by the Organization for Economic Cooperation and Development, published in November 2023.

Artificial intelligence has its risks

The AI Act takes a risk-based approach to regulation of AI Systems, based on a wider framework of identifying and managing AI risks developed by the EU.

The framework identifies four categories of risk:

  • Unacceptable Risk: A classification reserved for AI Systems or uses that pose significant risk of harm and unacceptable risks to individuals and their rights (eg, a system designed to manipulate elderly members of society).
  • High Risk: A classification for AI Systems and uses which fall within specific High-Risk categories of use cases (eg, in the course of employment) and system types (eg, those which require assessment under existing regulation such as the Medical Devices Regulation) and are not otherwise exempted or prohibited.
  • Limited Risk: A classification for AI Systems or uses (eg, chatbots) that do not fall within the High-Risk category but pose certain transparency risks and are subject to transparency requirements not associated with Minimal Risk systems.
  • Minimal Risk: A classification for AI Systems or uses that have minimal impact on individuals and their rights (eg, spam filters). These systems are largely unregulated by the AI Act directly and are instead governed by other EU-wide and national legislation.

This framework is translated into the provisions of the AI Act through three broad categories (a simple mapping of the categories is sketched after the list):

  1. Prohibited AI Practices (Unacceptable Risk)
  2. High-Risk AI Systems (High Risk), and
  3. An unstated catch-all category with varying levels of obligations and compliance requirements (Minimal and Limited Risk).
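
For readers who find a compact summary useful, the following minimal Python sketch records how the four risk categories map onto the Act’s three broad regulatory treatments. It is purely illustrative: the category names and description strings are informal summaries written for this example, not statutory language.

```python
from enum import Enum


class RiskCategory(Enum):
    """The four risk categories identified in the EU's AI risk framework."""
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"


# Informal mapping from risk category to the Act's broad regulatory treatment.
REGULATORY_TREATMENT = {
    RiskCategory.UNACCEPTABLE: "Prohibited AI Practice",
    RiskCategory.HIGH: "High-Risk AI System obligations (Providers and Deployers)",
    RiskCategory.LIMITED: "Transparency obligations",
    RiskCategory.MINIMAL: "Largely unregulated by the AI Act; other EU and national law applies",
}

if __name__ == "__main__":
    for category, treatment in REGULATORY_TREATMENT.items():
        print(f"{category.name}: {treatment}")
```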

Exemptions

The Act exempts several categories of AI Systems from its scope, including:

  • Exclusively personal and non-commercial uses
  • Certain military and national security uses
  • Certain law enforcement activities
  • Certain scientific (but not commercial) research and development
  • Certain research and testing (including for commercial purposes) within designated testing environments
  • Free and open-source systems, and
  • Certain AI Systems already in service within 24 months after the date of entry into force.

Obligations under the AI Act

The Act imposes various obligations on those developing and implementing AI Systems. These obligations differ depending on the risk category assigned to the AI System. In some cases, they may vary based on whether the organization or individual has developed the AI System and is considered a “Provider,” or whether they are simply using the system and are considered a “Deployer.”

For example, organizations implementing limited-risk AI Systems must comply with minimal obligations which primarily center on increasing transparency with users. Conversely, Deployers and Providers of High-Risk AI Systems must adhere to more stringent requirements, which vary based on their designation. These requirements include establishing comprehensive documentation, ensuring the AI System meets certain thresholds for accuracy and robustness, and performing fundamental rights impact assessments.

The AI Act also imposes certain transparency obligations on all AI Systems it regulates. For instance, in many circumstances the Act requires that AI Systems disclose to users that they are interacting with AI, while in other circumstances the creation of technical documentation is required.

General-purpose AI: A parallel classification

In parallel to the risk classifications, certain AI Systems may also fall within the definition of general-purpose AI, which triggers additional compliance obligations.

The AI Act breaks this concept into two types of technologies:

  • General-purpose AI models (GPAIM): “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.”
  • General-purpose AI Systems (GPAIS): “an AI system which is based on a general-purpose AI model and which has the capability to serve a variety of purposes, both for direct use as well as for integration in other AI systems.”

The common characteristic of these technologies is their ability to be applied to a variety of purposes and tasks, rather than a “narrow” scope.

The AI Act approaches regulation of this type of technology through the classification and regulation of the underlying GPAIM as either a GPAIM, or a GPAIM with systemic risk.

GPAIMs with systemic risk are those models that exhibit certain riskier characteristics, such as:

  • High-impact capabilities (eg, where the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10²⁵; a rough way to estimate this is sketched after this list),[2] or
  • Being categorized as a model with systemic risk by the EU Commission after being alerted by the EU scientific panel.
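
As a purely illustrative aid, the minimal sketch below applies a common community heuristic of roughly six floating point operations per parameter per training token to estimate whether a model’s training compute approaches the 10²⁵ FLOP threshold. The heuristic, the example model figures, and the function names are assumptions made for this example and are not drawn from the Act.

```python
# Rough, illustrative estimate of training compute against the AI Act's
# 10^25 FLOP presumption threshold for "high-impact capabilities".
# The ~6 FLOPs per parameter per training token figure is a common community
# heuristic, not part of the Act; treat the result as orientation only.

AI_ACT_FLOP_THRESHOLD = 1e25  # cumulative training compute, in floating point operations


def estimated_training_flops(num_parameters: float, num_training_tokens: float) -> float:
    """Approximate total training compute using the ~6 * parameters * tokens heuristic."""
    return 6 * num_parameters * num_training_tokens


def presumed_systemic_risk(num_parameters: float, num_training_tokens: float) -> bool:
    """True if the rough estimate meets or exceeds the 10^25 FLOP threshold."""
    return estimated_training_flops(num_parameters, num_training_tokens) >= AI_ACT_FLOP_THRESHOLD


if __name__ == "__main__":
    # Hypothetical model: 70 billion parameters trained on 15 trillion tokens.
    params, tokens = 70e9, 15e12
    print(f"Estimated training compute: {estimated_training_flops(params, tokens):.2e} FLOPs")
    print("Exceeds 10^25 FLOP threshold?", presumed_systemic_risk(params, tokens))
```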

Where an organization is involved in the creation, provision, or deployment of a standard GPAIM, they will be required to comply with several obligations, including:

  • Creating and maintaining technical documentation that can be used by the AI Office, national regulators, and downstream Providers/Deployers
  • Ensuring that policies are in place to require compliance with EU law on copyright, intellectual property, and other related rights, and
  • Developing and making available a detailed summary that describes the content used for training the model.

Where an organization is involved in the creation, provision, or deployment of a GPAIM with systemic risk, they will be required to comply with additional, stricter, obligations, including:

  • Performing model evaluations in accordance with industry-standardized protocols and tools, reflecting the state-of-the-art, and commensurate to the risk
  • Assessing and mitigating systemic risks at an EU-wide level
  • Tracking, documenting, and reporting any serious incidents that may occur through the GPAIM, and identifying any possible corrective measures, and
  • Ensuring an adequate level of digital and physical robustness and cybersecurity.

It should be noted that these obligations are not a replacement for those required under the risk classification of an AI System. If a GPAIS is considered to utilize GPAIMs with systemic risk, for example, and is also a High-Risk AI System, then the Provider will be required to comply with both sets of obligations.

Timeline for compliance


[Graphic: EU AI Act timeline for compliance]

Now that the AI Act has been published, its requirements will begin to take effect on a rolling basis over the next several years.

For example, organizations will have six months to ensure they do not use any AI Systems or technologies that may pose unacceptable risk. Similarly, many organizations will have twenty-four months to ensure they have the operational requirements in place to develop, provide, and/or deploy High-Risk AI Systems in a compliant manner (subject to certain exemptions outlined above).
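
As a rough orientation only, the sketch below adds those six-month and 24-month offsets to the 1 August 2024 entry-into-force date. The Act’s final provisions fix the exact application dates, which may differ slightly from simple month arithmetic, and the milestone labels here are informal summaries rather than statutory language.

```python
# Rough orientation: translating the transition periods mentioned above into
# approximate calendar dates, counted from entry into force on 1 August 2024.
# The Act's own provisions fix the exact application dates, which may differ
# slightly from this simple month arithmetic.
from datetime import date

from dateutil.relativedelta import relativedelta  # third-party: python-dateutil

ENTRY_INTO_FORCE = date(2024, 8, 1)

# Informal labels for the two transition periods discussed in this update.
MILESTONES_IN_MONTHS = {
    "Prohibited (unacceptable-risk) practices": 6,
    "Most High-Risk AI System obligations": 24,
}

for label, months in MILESTONES_IN_MONTHS.items():
    approximate_date = ENTRY_INTO_FORCE + relativedelta(months=months)
    print(f"{label}: approximately {approximate_date:%d %B %Y}")
```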

Organizations should consider whether they are subject to the AI Act and, if so, how they intend to comply with their obligations.

Sanctions for non-compliance

If an organization fails to implement these changes in time, or disregards its obligations once the AI Act is fully in force, it may be subject to heavy penalties for non-compliance, including the following (the arithmetic of the “whichever is higher” caps is illustrated after the list):

  • Fines of up to €35 million, or 7 percent of global annual turnover of the preceding financial year (whichever is higher), for failure to comply with obligations relating to Prohibited Practices
  • Fines of up to €15 million, or 3 percent of global annual turnover of the preceding financial year (whichever is higher), for failure to comply with obligations relating to general-purpose AI or High-Risk AI Systems
  • Fines of up to €7.5 million, or 1 percent of global annual turnover of the preceding financial year (whichever is higher), for supplying incorrect, incomplete, or misleading information to governing bodies or authorities in response to their requests.
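
The brief sketch below illustrates that arithmetic for a hypothetical organization: the applicable ceiling for each tier is the higher of the fixed euro amount and the stated percentage of global annual turnover. The tier labels and the turnover figure are assumptions made for this example; actual exposure depends on the infringement and the supervising authority.

```python
# Illustrative only: the "whichever is higher" fine ceilings under the AI Act.
# Each tier pairs a fixed euro cap with a percentage of global annual turnover;
# the applicable maximum is the higher of the two. Tier labels are informal.

FINE_TIERS = {
    "Prohibited Practices": (35_000_000, 0.07),                 # EUR 35M or 7% of turnover
    "General-purpose AI / High-Risk": (15_000_000, 0.03),       # EUR 15M or 3% of turnover
    "Incorrect or misleading information": (7_500_000, 0.01),   # EUR 7.5M or 1% of turnover
}


def maximum_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the ceiling for a tier: the higher of the fixed cap and the turnover-based cap."""
    fixed_cap, turnover_rate = FINE_TIERS[tier]
    return max(fixed_cap, turnover_rate * global_annual_turnover_eur)


if __name__ == "__main__":
    # Hypothetical organization with EUR 2 billion in global annual turnover.
    turnover = 2_000_000_000
    for tier in FINE_TIERS:
        print(f"{tier}: up to EUR {maximum_fine(tier, turnover):,.0f}")
```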

The European AI Act: A global standard?

Beyond its international reach, the EU’s approach to the regulation of AI increasingly appears to be the standard for governments and regulators across the world. In many cases, we are already seeing its impact in the US (for example, in Colorado and California) and in other jurisdictions (such as Canada), some of which have openly stated that they aim to map their approaches to their EU counterparts.

Organizations in the US (and abroad) are advised to begin considering how they intend to comply with the AI Act, and what this could mean for compliance at a local level. By acting early, organizations may be better positioned to adapt their compliance methodology to fit their operations, while ensuring compliance with imminent laws and those anticipated in the near future.

DLA Piper is here to help

Five years ago, DLA Piper established the first major law firm AI practice. DLA Piper’s AI and Data Analytics practice is a cross-functional team of more than 100 AI lawyers, data scientists, statisticians, software developers, policy strategists, and cyber and policy counsel.

DLA Piper’s AI focus encompasses both internal and external components – integrating AI into legal practice, and assisting clients in designing AI solutions for their businesses that comply with laws, regulations, and their own internal policies.

We have collaborated with leading institutions across industries – from the National Institute of Standards and Technology and Stanford Law, to the Mayo Clinic and the United Nations (UN) – to define the standards for safe AI.

Both the UN and the US AI Science Envoy have publicized our work on AI and law, calling it, in the words of the UN, “profound insights on the intersection of AI, law, and ethics.”

Our work on the development of legal red teaming has been highlighted by academics as a crucial step in the effective governance of generative AI, including multiple citations in the forthcoming Handbook of the Foundation and Regulation of Generative AI (Oxford University Press, 2024).

We have been deeply involved throughout the creation of the European AI Act, advising clients across Europe and internationally in key meetings with EU legislators. We have also provided our insights to the European Commission, Organization for Economic Cooperation and Development, International Standards Organization, the United Kingdom, the National Institute of Standards and Technology, and others involved in standard-setting and regulatory developments.

DLA Piper advises numerous Fortune 10, 50, 100, and 500 companies, global brands, and large language model innovators on AI adoption, compliance, and litigation.

The Financial Times recognized DLA Piper’s AI team as the Innovative Technology Practice of the Year, and Law.com awarded us the 2024 Law Firm Best Use of AI.

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

For further information or if you have any questions, please contact any of the authors.
