EU AI Act’s ban on Prohibited Practices takes effect
The first compliance deadline set out in the European Union (EU)’s Regulation (EU) 2024/1689, or the EU AI Act, took effect on February 2, 2025.
Published in the Official Journal on July 12, 2024, and entering into force on August 1, 2024, the EU AI Act is the world’s first comprehensive AI regulation, establishing a sector-agnostic regulatory framework designed to shape AI governance and oversight across the EU. While the Act primarily regulates entities and individuals within the EU, its reach is extraterritorial: companies operating outside of Europe, including those operating in the United States, may still be subject to its requirements.
This week’s compliance deadline requires both Deployers and Providers of AI Systems to cease placing on the market, putting into service, or using AI Systems that leverage Prohibited AI Practices. These Prohibited Practices, defined in Article 5 of the Act and outlined below, are considered to present “unacceptable risk” of harm to individuals and their rights.
If organizations do not adhere to compliance requirements under the Act, they may be subject to substantial penalties. Failure to comply with the obligations related to Prohibited Practices may result in fines of up to EUR 35 million or 7 percent of global annual turnover for the preceding financial year, whichever is higher.
Prohibited Practices under the EU AI Act
The Act enumerates eight categories of prohibited AI. While some of these categories are most likely to apply to governmental entities or actors, several also affect private industry.[1]
See our flowchart for identifying Prohibited Practices in an AI System.
Compliance with the Act: What’s next?
The EU AI Act’s remaining compliance obligations will take effect on a rolling basis over the next several years, with the next deadline falling on August 2, 2025. At that time, Providers of General-Purpose AI Models must comply with the Act’s transparency obligations, such as maintaining technical model and dataset documentation. Chapter IV.
In parallel with the Act’s rolling compliance obligations, the EU intends to develop the required governmental infrastructure and continue to promulgate guidance to facilitate compliance with the Act’s obligations. As part of this effort, by May 2, 2025, the European Artificial Intelligence Office plans to issue Codes of Practice, providing guidance to Providers of General-Purpose AI Models regarding their obligations under the Act. Art. 56. Further, by August 2, 2025, the European Commission plans to issue guidance to facilitate the reporting of serious AI System incidents by Providers of High-Risk AI Systems. Art. 73.
DLA Piper is here to help
DLA Piper’s team of lawyers and data scientists assists organizations in navigating the complex workings of their AI Systems to help ensure compliance with current and developing regulatory requirements. We continuously monitor AI-related legal developments and their impact on industries across the world.
At the Financial Times’s 2024 North America Innovative Lawyer awards, DLA Piper received the Innovation in New Services to Manage Risk award for its AI and Data Analytics practice.
For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.
[1] Summarized for illustration only. We encourage organizations to refer to the full text of Article 5 and confer with counsel.