European Commission publishes guidelines on Prohibited AI Practices
Navigating the EU's AI Act: Key guidelines on Prohibited AI Practices

On February 4, 2025, two days after the European Union Artificial Intelligence Act's (EU AI Act) ban on AI Systems that leverage Prohibited Practices went into effect, the European Commission published its Commission Guidelines on Prohibited Artificial Intelligence Practices in draft form (Draft Guidelines).
These Draft Guidelines provide additional clarification and context on the types of AI practices that are prohibited under the Act. They provide direction to 1) market surveillance authorities in their enforcement efforts and 2) Deployers and Providers in their efforts to comply with the Act. While not binding or authoritative, the Draft Guidelines are intended to promote consistent application of the EU AI Act across the European Union (EU). The Draft Guidelines have been approved by the European Commission but have not yet been formally adopted.
Under Article 5 of the EU AI Act, AI Systems that leverage Prohibited Practices are considered to pose an unacceptable risk to fundamental rights and EU values.
Enforcement of the EU AI Act is assigned to market surveillance authorities designated by the Member States and to the European Data Protection Supervisor. There are heavy penalties for non-compliance with provisions dealing with Prohibited Practices, including fines of up to EUR35 million or 7 percent of total worldwide annual turnover for the preceding financial year (whichever is higher).
Key takeaways
- The Draft Guidelines aim to increase clarity and provide insight into the Commission’s interpretation of Prohibited Practices under Article 5 of the Act.
- The Draft Guidelines are lengthy but are still in draft form and, even when finalized, will be non-binding. All guidance provided therein is subject to the formal requirements set forth in the Act.
- Though the Draft Guidelines are not comprehensive, they are a helpful step in assessing whether an AI System qualifies as prohibited under the Act.
Selected clarifications and examples from the Draft Guidelines
For each of the Prohibited Practices outlined in Article 5 of the Act, the Draft Guidelines provide an overview of the main components of the provision, practical examples, clarification of practices that are out of scope from the prohibitions, and measures that can be taken to avoid providing or using AI systems in ways that are likely to be prohibited. The Draft Guidelines also highlight where the prohibitions overlap or are related to other Union legal acts.
The table below highlights key clarifications and examples from the Draft Guidelines based on questions that we frequently receive from clients. The examples assume that all of the required elements of each prohibited category are otherwise met (except where noted). Additionally, it is important to note that the overarching exceptions set forth in Article 2 (eg, national security) apply to the Prohibited Practices and are relevant to the practical application of these categories.
[Table: "Prohibited Practice" | "Key insights from the Draft Guidelines" – the cell content, including an example of the safety/medical exception (not prohibited), did not survive extraction.]
DLA Piper is here to help
DLA Piper’s team of lawyers and data scientists assist organizations in navigating the complex workings of their AI Systems to help ensure compliance with current and developing regulatory requirements. We continuously monitor updates and developments arising in AI and its impacts on industry across the world.
At the Financial Times's 2024 North America Innovative Lawyer awards, DLA Piper received the Innovation in New Services to Manage Risk award for its AI and Data Analytics practice.
For more information on AI and the emerging legal and regulatory standards, please visit DLA Piper’s focus page on AI.
Gain insights and perspectives that will help shape your AI Strategy through our AI ChatRoom series.
For further information or if you have any questions, please contact any of the authors.