
31 December 2024 | 14 minute read

Innovation Law Insights

Podcast

EDPB opinion on AI model training: How to address GDPR compliance?

The European Data Protection Board’s Opinion 28/2024 represents a landmark effort to clarify how the GDPR applies to AI models.

With organizations increasingly turning to AI for decision-making, customer service, fraud detection, and personalization, the question of how to reconcile these technologies with stringent data protection laws has never been more pressing.

You can find the episode of our podcast on the EDPB’s opinion on AI training and an article on the topic here.

 

Data Protection and Cybersecurity  

Landmark decision of the Garante against an AI-powered chatbot

The Italian Data Protection Authority (the Garante) recently issued a significant ruling addressing breaches of the GDPR by an AI-powered chatbot.

More specifically, the Garante’s investigation was triggered by a data breach the chatbot suffered on March 20, 2023. The investigation subsequently revealed that the company had breached additional GDPR obligations, leading to a fine of EUR 15 million.

We outline below the main violations found by the Garante.

Lack of data breach notification to the Garante

The Italian data protection authority noted that the company failed to notify the Garante about the breach in a timely manner, as required under Article 33 of the GDPR, despite the breach’s potential to cause significant risks to affected individuals.

It highlighted that, since the company had no establishment in the EU at the time of the events and therefore no lead supervisory authority, it had to notify the data breach to all the EU data protection authorities whose residents had been affected.

Lack of legal basis for processing by the generative AI model

The Garante found the company in violation of Articles 5(2) and 6 of the GDPR for failing to identify a valid legal basis for processing personal data to train its AI model before launching the service.

The company claimed that the processing to provide the AI service was based on the performance of a contract and that algorithm training relied on legitimate interest. However, the Garante determined that the company hadn’t formalized these legal bases before the service’s launch. Moreover, the documentation submitted later, such as a Data Protection Impact Assessment (DPIA) and a Legitimate Interest Assessment (LIA), was drafted months after the service went live.

Given the company’s establishment in Ireland, the Italian authority referred the matter to the Irish Data Protection Commission, as the lead supervisory authority under GDPR Article 56, for further evaluation and action regarding the use of legitimate interest as a legal basis.

Lack of transparency due to the unclear privacy notice

The Garante found that the company violated Articles 5(1)(a), 12, and 13 of the GDPR due to significant deficiencies in its privacy policy. The issues primarily related to a lack of transparency, accessibility, and completeness in the information provided about how personal data was processed, especially for training AI models.

The investigation revealed that the privacy policy was only available in English and not easily accessible. Users were unable to review the privacy policy before providing their data, as it was poorly positioned on the registration page. The privacy policy only addressed the data collected for using the chatbot service, without providing any information about how personal data, including publicly available data from non-users, was processed when training AI models.

The language in the privacy policy was found to be vague and unclear. Terms like “improving services” failed to communicate the specific purpose of training AI models, such as fine-tuning or advanced AI research. This lack of clarity made it difficult for individuals to understand the nature and scope of the data processing activities, which included the innovative and complex use of AI technology.

The company argued that it had taken steps to provide transparency through privacy policies, pop-ups, and publications, including research documents and technical notes made available since 2019. However, the Garante concluded that these efforts were insufficient. The privacy policy didn’t provide critical information, such as the legal basis for data processing or the potential impacts of training activities on individuals’ data. The use of supplementary documents didn’t fulfill GDPR requirements, as users and non-users weren’t reasonably expected to seek out or review such materials independently.

Lack of age verification for minors

Another critical issue addressed by the Garante was the protection of minors’ data. The investigation revealed violations of GDPR Articles 24 and 25(1) for failing to implement adequate systems to verify the age of users registering for the chatbot.

More specifically, the terms of service stated that minors between 13 and 18 years old required parental consent to use the service, but no mechanisms were in place to enforce this requirement. This omission allowed all users, including minors, to access the service without age verification or parental involvement.

The Garante noted that the lack of common European standards for age verification doesn’t exempt data controllers from their responsibility to verify the contractual capacity of users, as required by the GDPR.

Conclusions

This decision concludes one of the Garante’s first proceedings against a company offering AI-driven services directly to end-users.

It highlights the Garante’s restrictive approach to AI technology, particularly with reference to the accountability and transparency principles. Companies providing AI-powered services directly to end-users have to carefully consider adopting appropriate measures to verify the age of their users.

The decision sets an important precedent, signaling that regulatory authorities will closely scrutinize the operation of AI technologies and their alignment with privacy and data protection laws. For businesses, this highlights the need to integrate compliance into the core design and functionality of their AI systems.

It will be interesting to see how the scenario will change after the EDPB’s opinion on AI training. You can read an article about it here.

Author: Roxana Smeria

 

Intellectual Property

AI and Copyright: European Commission’s opinion on the Italian Draft Law

On November 5, 2024, the European Commission issued a detailed opinion (C(2024) 7814) addressing Italy’s draft law on AI, criticizing several aspects almost article by article. Concerns were raised over multiple provisions, including those related to the interaction between AI and copyright.

In April 2024, Italy approved a draft law introducing measures to regulate the use of AI, aiming to comply with the European regulatory framework and protect fundamental rights, including copyright. The draft law establishes regulatory criteria to balance the opportunities offered by new technologies with the risks associated with their improper use, underutilization, or harmful deployment. It includes specific provisions to ensure transparency, safety, and user rights protection concerning AI-generated or modified content. However, the European Commission identified issues in some provisions and requested changes to prevent overlaps with the EU AI Regulation. Among the targeted provisions are those addressing the relationship between AI and copyright.

The draft law approaches the issue of copyright and AI primarily through two measures:

  • Identifying AI-generated content (Article 23)
  • Protecting copyright for works created with AI assistance (Article 24)

Key provisions

Article 23

This article introduces the requirement to identify AI-produced or AI-modified content with a visible marker, such as a watermark with the acronym “AI” or an audio announcement for sound content. The identification must appear at the beginning and end of the transmission and after every commercial break, with exceptions for content that is manifestly artistic or satirical. The goal is to ensure that users are aware of the artificial nature of the content they interact with.

Article 24

This article amends the copyright law by introducing specific rules for works created with AI systems. Key provisions include:

  • Recognizing copyright ownership: AI-generated content cannot be considered intellectual creations under copyright law unless there is human creative input.
  • Mandatory licensing for copyrighted works: AI system providers must obtain specific licenses for the use of copyrighted works when training their models.

The European Commission’s concerns

In its opinion, the Commission raised specific concerns about Article 23, emphasizing the risk of overlapping with the EU AI Regulation:

  • Article 23(1)(b): The provision requiring AI-generated content to be clearly identified with a visible marker or audio announcement was deemed redundant compared to the obligations under Article 50(2) and (4) of the EU AI Regulation.
  • Article 23(1)(c): The requirement for video platform providers to protect the public from AI-generated or AI-modified informational content presented as real was criticized as unclear and overlapping with Articles 50(1), (2), and (4) of the EU AI Regulation.

The opinion also referenced case law from the Court of Justice of the EU (Cases 34/73, Fratelli Variola, and 50/76, Amsterdam Bulb), reiterating that Member States are prohibited from duplicating provisions of an EU regulation in national law, as doing so obscures their origin in EU law.

Conclusion

Although the Italian legislature sought to enhance transparency and copyright protection in the AI domain, the proposed measures could create regulatory conflicts, hindering the uniform application of the EU AI Regulation. Specifically, duplicating or overlapping rules at the national level may lead to legal uncertainty and potentially undermine the coherent implementation of European law.

Author: Maria Vittoria Pessina

UK: Government begins consultation on copyright and AI

On December 17, the UK government launched a public consultation on copyright and AI, consisting of 47 questions directed at sector professionals, who have until February 25, 2025, to share their views.

As a consequence of Brexit, the UK is no longer bound to implement the Directive on Copyright in the Digital Single Market, which introduced important changes to copyright law, including exceptions allowing text and data mining (TDM) under certain conditions. The only TDM exception currently in force in the UK is provided by the Copyright, Designs and Patents Act 1988, which doesn’t cover commercial activities.

This highlights the urgency of updating the legal framework regarding copyright and introducing a new exception that takes into account the increasingly prominent role of AI, while balancing the interests of rights holders who should be compensated for the use of their works in AI training. A consultation on AI-generated works was already initiated in 2021, exploring the possibility of extending the text and data mining exception to commercial activities; but this measure wasn’t implemented.

The consultation addresses transparency, technical tools, and labeling.

Regarding transparency, according to the UK government, a successful synergy between copyright and AI will depend on strengthening the relationship between developers and rights holders. It’s necessary to consult industry professionals on the level of transparency required for the use of works to train AI models.

As for technical tools to protect copyright, while there are already numerous such tools, there’s a need for further implementation to balance the rights of copyright holders with those of AI system developers.

The government is also considering the possibility of labeling a work as “AI-generated”. This process, undoubtedly beneficial for rights holders and the public, presents a significant technical challenge.

Although the most desirable option for UK policymakers is a reform that extends the copyright exception for TDM to commercial purposes, the government, through the consultation, hasn’t ruled out maintaining the current legal framework (Option 0). The other three options, however, propose changes. The first option would aim to enhance copyright protection by requiring licenses whenever training an AI model. The second option would introduce a broad exception for data mining, allowing the extraction of data from copyright-protected works without the rights holders’ permission. The third scenario would involve establishing an exception for data extraction from copyrighted works, supported by transparency measures to ensure that AI developers are clear about the works used to train their models.

We’re waiting for the outcome of the consultation, which could pave the way for a future where copyright and AI coexist in a fair and transparent manner, responding to the needs of a rapidly evolving sector.

Author: Noemi Canova

 

Technology Media and Telecommunication

Public consultations by BEREC on draft reports regarding copper network switch-off, regulation of access to physical infrastructure, and infrastructure sharing

The BEREC (Body of European Regulators for Electronic Communications) has recently launched three public consultations on the draft reports concerning:

  • managing the copper network switch-off;
  • regulating physical infrastructure access;
  • infrastructure sharing as a lever for the environmental sustainability of networks and electronic communications services.

The draft reports were approved during the BEREC plenary meetings held on December 5 and 6.

The draft report on managing the copper network switch-off aims to provide an update on the progress of the copper network switch-off in Europe. The report also includes insights on how the most advanced countries are managing and regulating the switch-off.

As indicated by BEREC, the draft report is based on data provided by the national regulatory authorities (NRAs) of 31 European countries. The document first provides an overview of the current state of migration and copper network switch-off, as well as the related future plans (Chapters 2, 3, and 4). It then continues with a detailed analysis of the rules established by NRAs for the migration process and the copper switch-off (Chapter 5) and with a description of further measures that NRAs could adopt to facilitate this process (Chapter 6). Lastly, the report describes the lessons learned by NRAs in recent years while managing the migration process from copper networks to those based on more advanced technologies.

The draft report on regulating access to physical infrastructure focuses on access to physical infrastructure for the development of very high-capacity fixed networks. The draft provides an overview of access to physical infrastructure (whether owned by electronic communications operators or not) in Europe, as well as the strategies adopted by electronic communications operators when they intend to expand their network and need to use elements of physical infrastructure.

The report also examines how NRAs have considered physical infrastructure in the context of their relevant market reviews; whether, and at what stage, physical infrastructure was evaluated; and whether obligations related to access to physical infrastructure were imposed on operators with significant market power (Section 3). The subsequent sections illustrate the data collection efforts conducted by national authorities to carry out their market reviews (Section 4) and the remedies applied by them (Section 5), as well as asymmetric and symmetric regulation in the context of encouraging the development of very high-capacity networks, also providing examples on the experiences of some countries and verifying the interplay between the two types of regulation to achieve connectivity goals (Section 6). Section 7, instead, reports the expectations and future challenges identified by NRAs.

The third draft report analyzes infrastructure sharing as a lever for environmental sustainability of electronic communications networks and services, aligning with the broader EU objectives to reduce the environmental impact of the Information and Communications Technology (ICT) sector.

In accordance with the goals set out under the EU Green Deal and the United Nations’ 2030 Agenda, BEREC examines how regulatory tools might enhance the environmental performance of telecommunications by minimizing the environmental footprint associated with network deployment and operation. The document builds on previous BEREC publications on infrastructure sharing and bases its analysis on a survey distributed among NRAs and a consultation of stakeholders conducted in the context of a technical workshop.

Interested parties can submit their contributions to the public consultations on the draft reports on managing copper network switch-off and on infrastructure sharing by January 31, 2025. Contributions to the public consultation on the regulation of access to physical infrastructure can be submitted until February 19, 2025.

Authors: Massimo D’Andrea, Flaminia Perna, Matilde Losa, Arianna Porretti


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
