
Innovation Law Insights

29 August 2024
Artificial Intelligence

New US legislation aims to regulate AI-generated content

A group of US senators has recently introduced an ambitious bill aimed at making it easier to authenticate and identify AI-generated content. This legislative initiative, known as the COPIED Act (Content Origin Protection and Integrity from Edited and Deepfaked Media Act), seeks to establish federal guidelines to ensure transparency in AI-generated content and protect copyright in an era increasingly dominated by digital manipulation and automated content creation.

The primary goal of the COPIED Act is to task the National Institute of Standards and Technology (NIST) with developing advanced transparency guidelines and standards. These tools are intended to facilitate the identification of content origins and distinguish between synthetic and human-originated content. Specifically, the bill proposes the introduction of mandatory “watermarking” on content generated or modified using AI, ensuring that the origin of the content is clearly identifiable and not subject to manipulation.
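
The bill leaves the technical standards to NIST, but provenance schemes of this kind generally attach machine-readable origin metadata to content in a way that makes tampering detectable. Purely as an illustrative sketch, here is a minimal HMAC-signed provenance manifest in Python; the fields and key handling are our simplified assumptions, not a format prescribed by the COPIED Act or NIST:

    import hashlib
    import hmac
    import json

    SIGNING_KEY = b"demo-key"  # hypothetical key; real schemes rely on proper key management

    def attach_provenance(content: bytes, generator: str) -> dict:
        """Build a manifest recording the content's origin, with a tamper-evident signature."""
        manifest = {
            "sha256": hashlib.sha256(content).hexdigest(),  # fingerprint of the content itself
            "generator": generator,                         # e.g. the AI tool that produced it
            "synthetic": True,                              # flags the content as AI-generated
        }
        payload = json.dumps(manifest, sort_keys=True).encode()
        manifest["signature"] = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return manifest

    def verify_provenance(content: bytes, manifest: dict) -> bool:
        """Return False if the content or its origin metadata was altered after signing."""
        claimed = {k: v for k, v in manifest.items() if k != "signature"}
        payload = json.dumps(claimed, sort_keys=True).encode()
        expected = hmac.new(SIGNING_KEY, payload, hashlib.sha256).hexdigest()
        return (hmac.compare_digest(expected, manifest["signature"])
                and claimed["sha256"] == hashlib.sha256(content).hexdigest())

In a scheme of this kind, any attempt to strip or edit the origin information, or to alter the content while keeping the manifest, makes verification fail, which is the property the bill describes when it requires origin information that is not subject to manipulation.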

Developers of applications and users of tools for creating copyright-protected content will have to adopt measures to provide users with comprehensive information about the origin of AI-generated content within two years of the law being enacted. The legislation prohibits the removal, modification, or disabling of information about the origin of content, except for specific purposes such as research and security, ensuring greater traceability and accountability in the use of digital content.

Another crucial aspect of the bill is the protection of copyright, including the explicit prohibition of using copyright-protected content to train AI systems or create synthetic content without the explicit and informed consent of the right holders. This provision not only protects creators from potential misuse of their works by algorithms but also ensures they’re fully informed and can define the terms of use for their content.

The COPIED Act has already garnered broad support, particularly from unions and associations in the entertainment sector such as SAG-AFTRA, the National Music Publishers’ Association, the Recording Industry Association of America, and the News Media Alliance. These groups have welcomed the initiative as a significant step toward protecting the interests of their members in an increasingly digital and complex media landscape.

Deepfake regulation in Europe

In addition to defining guidelines, the COPIED Act mandates that NIST, the US Patent and Trademark Office (USPTO), and the US Copyright Office (USCO) promote public awareness about content manipulated by AI, including deepfakes. This commitment aims not only to educate the public about the broad impact of manipulated digital content but also to encourage greater awareness of the risks associated with its unauthorized dissemination.

The issue of deepfakes is at the centre of international legislative debate, as highlighted by the EU AI Act, which includes specific provisions to ensure that deepfake content is clearly identified as such when used in public or commercial contexts. This measure is essential to protect freedom of expression and to prevent potential abuses of copyright in creative, satirical, or artistic situations.

With the introduction of a similar regulatory framework, the US is following the EU’s example in attempting to regulate AI use to protect copyright holders from the increasingly frequent digital manipulations and alterations of their content. These efforts are crucial in striking a balance between technological innovation and the protection of fundamental rights in the context of an evolving digital society.

Authors: Maria Vittoria Pessina, Alessandra Faranda

Why is AI training recommended and mandatory?

A new study reveals a lack of knowledge among employees about how to use AI, which runs counter to C-level expectations that it will boost productivity. But are managers aware that AI training and internal rules are mandatory under the EU’s AI Act and could protect their companies from significant risks?

The results of research into employee use of AI

Recent research reveals a striking gap between executive expectations and employee experiences with AI. While 96% of C-suite executives anticipate AI will enhance productivity, 47% of employees are unsure how to realize these benefits, and 77% feel that AI has actually added to their workload.

What’s causing this disconnect? A possible explanation is the lack of effective leadership. Less than 26% of executives report having AI training programs in place, and just 13% say their company has a well-developed AI strategy.

These findings suggest that many companies are diving into AI investments without fully grasping the cultural changes required for successful implementation. Without this understanding, the rush to embrace AI could lead to squandered resources and missed opportunities, rather than the competitive edge they seek.

But let’s get to the core of the issue, which is even more complicated.

Why is AI training highly recommended?

The research above shows how adopting AI solutions without training employees could work against the company’s interests.

At the same time, AI training:

  • is necessary to prevent employees from using AI solutions inappropriately, such as providing copyrighted content, company trade secrets, or personal data to AI solutions that haven’t been approved by the company; and
  • requires the company to have internal policies on how to use AI solutions and which solutions can be used.

Otherwise, employers may not be able to challenge employee misconduct due to a lack of internal rules. In fact, we often see companies implementing AI solutions, often as part of pilot programs, just to see how they’re received by employees, without adopting any internal rules or running internal training because of the limited scope of the initiative.

But feedback won’t be useful if employees aren’t trained on the benefits and limitations of AI.

Why is AI training mandatory under the EU AI Act?

Article 4 of the EU AI Act refers to “AI literacy” and states that:

“Providers and deployers of AI systems shall take measures to ensure, to their best extent, a sufficient level of AI literacy of their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context the AI systems are to be used in, and considering the persons or groups of persons on whom the AI systems are to be used”.

AI literacy is defined under the AI Act as “skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations in the context of this Regulation, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause”. Recital 20 of the EU AI Act provides that “deployers should ensure that the persons assigned to implement the instructions for use and human oversight as set out in this Regulation have the necessary competence, in particular an adequate level of AI literacy, training and authority to properly fulfil those tasks”.

Based on the above, the obligation to provide AI training to personnel:

  • applies to both providers and users, which means that any company using AI solutions is obliged to train its staff, and that the training must be differentiated according to the actual use of AI solutions that each category of staff is expected to perform;
  • requires that AI training be accompanied by internal policies and procedures on how AI will be used and how AI solutions can be approved.

Interestingly, Article 4 is one of the provisions that will become effective on 2 February 2025.

So companies have less than six months to adopt policies and procedures on the appropriate use of AI solutions and to conduct AI training for their employees.

Author: Giulio Coraggio

 

Data Protection and Cybersecurity

Is privacy for generative AI at a turning point?

The Hamburg Data Protection Authority’s position on the lack of personal data processing by large language models (LLMs) during data storage, if combined with CNIL’s recent view, might signal a radical change in privacy authorities’ approach to data processing performed by generative AI.

Hamburg Data Protection Authority’s view on data storage by LLMs

In a previous article, I discussed the paper issued by the Hamburg Data Protection Authority. The paper argued that, unlike traditional data systems, LLMs process tokens and vector relationships (embeddings). These tokens fragment the original information into such small parts that storing them doesn’t constitute the processing of personal data.

According to the authority, tokens and embeddings in LLMs lack the direct and identifiable link to individuals required by CJEU jurisprudence to be classified as personal data. And when LLMs respond to prompts, they generate new information that cannot be considered a copy of the original due to the “creation” phase.
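
To make the tokenisation argument concrete, here is a toy sketch in Python; the vocabulary and the greedy splitting rule are invented for illustration and are far simpler than the subword tokenizers and learned embeddings used by real LLMs:

    # Toy vocabulary mapping text fragments to integer IDs (invented for illustration).
    vocab = {"Mar": 0, "ia": 1, " lives": 2, " in": 3, " Ham": 4, "burg": 5}

    def toy_tokenize(text: str) -> list[int]:
        """Greedily split text into known fragments and map each fragment to its ID."""
        ids, i = [], 0
        while i < len(text):
            for piece, idx in sorted(vocab.items(), key=lambda kv: -len(kv[0])):
                if text.startswith(piece, i):
                    ids.append(idx)
                    i += len(piece)
                    break
            else:
                i += 1  # skip characters the toy vocabulary doesn't cover
        return ids

    print(toy_tokenize("Maria lives in Hamburg"))  # [0, 1, 2, 3, 4, 5]

What a trained model retains are numerical parameters (weights and embedding vectors) associated with such IDs, not the original sentence as a retrievable record, which is the core of the Hamburg authority’s argument.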

While it may be possible to extract training data from LLMs, developers of AI solutions have to implement appropriate guardrails to ensure that outputs can’t be deemed copies or even derivative works of the original content.

CNIL’s innovative interpretations on privacy and generative AI

The Hamburg authority’s position aligns closely with the views expressed by the French Data Protection Authority (CNIL) in its current consultation on applying the GDPR to AI models.

The CNIL has asked stakeholders “to shed light on the conditions under which AI models can be considered anonymous or must be regulated by the GDPR”.

The CNIL has also shown a more open approach to relying on legitimate interests as a legal basis for developing AI systems, which is crucial for the data collection phase necessary for AI training. The CNIL emphasizes that the legitimate interests underlying data processing must be clearly defined in the Legitimate Interest Assessment (LIA), and the commercial purpose of developing an AI system doesn’t contradict the use of legitimate interest as a legal basis.

In any case, developers must ensure that data processing is essential for development and doesn’t threaten individuals’ rights and freedoms.

Potential convergence of views between Hamburg and CNIL on AI

If we combine the positions of the Hamburg Data Protection Authority and CNIL, developers and deployers might have found major support in maintaining the GDPR compliance of data processing through generative AI solutions. Specifically:

  • Collected data could be processed based on legitimate interest, but with the most relevant personal data automatically removed immediately after collection to reinforce the LIA.
  • Only filtered data should be provided to the AI model for training, strengthening the argument that tokens stored by LLMs don’t qualify as personal data (a minimal filtering sketch follows this list).
  • Guardrails should be in place to ensure outputs can’t be copies or derivative works of any data used for training.
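
As a minimal sketch of the filtering step referred to above, the following Python snippet removes a few obvious identifiers before text reaches a training pipeline; the regular expressions are simplistic placeholders, and a real deployment would need far more robust detection (for example named-entity recognition and dedicated PII tooling) validated in the DPIA:

    import re

    # Simplistic patterns for common identifiers (illustrative only).
    PII_PATTERNS = {
        "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
        "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def strip_pii(text: str) -> str:
        """Replace detected identifiers with neutral placeholders before training."""
        for label, pattern in PII_PATTERNS.items():
            text = pattern.sub(f"[{label} removed]", text)
        return text

    raw = "Contact Jane at jane.doe@example.com or +39 02 1234 5678 about the claim."
    print(strip_pii(raw))
    # Contact Jane at [email removed] or [phone removed] about the claim.
    # (the name "Jane" is left untouched, showing why regex alone isn't enough)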

This approach should be supported by a detailed Data Protection Impact Assessment (DPIA) and a Legitimate Interest Assessment (LIA) and could offer significant protection for companies developing and exploiting AI solutions.

Additionally, this approach could be valuable in defending against intellectual property challenges, as it aligns with the Text and Data Mining (TDM) copyright exception.

Author: Giulio Coraggio

Italian DPA publishes FAQs on the right to be forgotten for cancer survivors

Following the entry into force of Law No. 193/2023 on the right to be forgotten for cancer survivors, the Italian Data Protection Authority (Italian DPA) has released a set of FAQs to clarify its scope and application.

The right to be forgotten for cancer survivors is an important protection for individuals who have overcome cancer and have clinically recovered. Law No. 193/2023, along with the FAQs published by the Italian DPA, clarifies how to prevent discrimination related to oncological diseases that ended years ago. The legislation protects the rights of those who have recovered by restricting banks, insurance companies, and employers from requesting information about past oncological conditions that could lead to disadvantageous treatment of cancer survivors.

According to the provisions, banks, insurance companies, and employers – both in the public and private sectors – cannot ask for information about oncological diseases from which a person has recovered, provided that treatment ended more than ten years ago (or five years if the illness occurred before the age of 21) and there have been no recurrences. For employers this prohibition applies both during the hiring process and throughout the employment relationship. The goal is to prevent discrimination that could negatively affect the employment or financial conditions of those who have recovered.

Moreover, banks, credit institutions, insurance companies, and financial and insurance intermediaries have to provide clear and adequate information regarding the right to be forgotten for cancer survivors. This obligation includes explicitly mentioning this right in the forms and documents specifically prepared and used for establishing and renewing contracts.

The law also has significant implications for those wishing to adopt a child. The Juvenile Court, responsible for evaluating adoptive couples, cannot collect information on past oncological illnesses if more than ten years have passed since the treatment concluded, or five years for those who had the illness before the age of 21. This principle also applies to international adoptions, ensuring that a past oncological history doesn’t become an unjustified obstacle for those who have recovered.

Finally, the Italian Data Protection Authority clarified that sanctions provided in the GDPR can be issued for breaching the right to be forgotten for cancer survivors.

This legislation represents a significant step toward protecting the privacy and dignity of individuals who have clinically recovered, ensuring that their oncological past doesn’t affect professional, financial, or family opportunities.

Author: Roxana Smeria

 

Intellectual Property

UPC: The first cases before the Milan Central Division

The Milan Central Division was inaugurated on 1 July 2024 and, after just one month, it’s already fully operational.

According to the latest update published on the UPC website at the end of July, two cases have already been brought before the Milan Central Division, namely a revocation action and an application for provisional measures. This brings the total number of cases filed before the Court of First Instance to 447 since the UPC system became operational (1 June 2023).

More details on the parties involved in the two proceedings will be available on the UPC’s Case Management System (CMS) platform in the coming weeks.

Authors: Massimiliano Tiberio, Camila Francesca Crisci

 

Technology Media and Telecommunication

ESMA releases new working paper on DeFi: Categorising smart contracts

The European Securities and Markets Authority (ESMA) has recently published a groundbreaking working paper focused on the rapidly evolving field of decentralized finance (DeFi). Titled “Decentralised Finance: A Categorisation of smart contracts”, the paper delves into the critical role of smart contracts in the DeFi ecosystem. ESMA’s analysis employs advanced techniques such as natural language processing and topic modelling to categorize and understand the diverse functionalities of smart contracts deployed on blockchain networks.

DeFi and smart contracts

DeFi represents a transformative shift from traditional financial systems, replacing conventional intermediaries with automated protocols governed by smart contracts. These self-executing agreements are pivotal in defining the financial landscape of DeFi, where transactions are conducted directly between participants without the need for traditional banks or brokers. Smart contracts not only automate financial interactions but also ensure transparency and reliability through blockchain’s immutable ledger.

Smart contracts on platforms like Ethereum are versatile, serving as the backbone for various financial instruments and protocols within DeFi. They enable the creation of tokens such as stablecoins and governance tokens, facilitate lending and borrowing through decentralized lending protocols, and power decentralized exchanges (DEXs) where users trade digital assets directly. This infrastructure supports innovative decentralized applications (dApps) that offer financial services ranging from lending and borrowing to automated trading and asset management.
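
Smart contracts themselves are usually written in blockchain-specific languages such as Solidity. Purely to illustrate the kind of self-executing logic an ERC20-style token contract encodes, here is a toy in-memory ledger in Python; the class and its simplifications are ours, not part of the ERC20 standard or ESMA’s paper:

    class ToyToken:
        """Toy analogue of the core state an ERC20-style token contract keeps on-chain."""

        def __init__(self, initial_supply: int, owner: str) -> None:
            self.balances: dict[str, int] = {owner: initial_supply}

        def balance_of(self, account: str) -> int:
            return self.balances.get(account, 0)

        def transfer(self, sender: str, recipient: str, amount: int) -> None:
            # The "self-executing" rule: the transfer happens only if the sender holds
            # enough tokens, with no intermediary deciding whether to honour it.
            if amount <= 0 or self.balance_of(sender) < amount:
                raise ValueError("transfer rejected by contract rules")
            self.balances[sender] -= amount
            self.balances[recipient] = self.balance_of(recipient) + amount

    token = ToyToken(initial_supply=1_000, owner="alice")
    token.transfer("alice", "bob", 250)
    print(token.balance_of("alice"), token.balance_of("bob"))  # 750 250

On a real blockchain this state and these rules live on an immutable ledger, which is what gives DeFi both its transparency and, as discussed below, its rigidity once code is deployed.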

Risk to investors and financial stability

While DeFi promises increased accessibility and efficiency, it also introduces unique risks. Smart contracts, once deployed, are immutable and execute autonomously based on predefined code. This eliminates human intervention but also exposes vulnerabilities such as coding errors, operational dependencies, and susceptibility to malicious activities like hacking and fraud. The decentralized nature of DeFi further complicates regulatory oversight and investor protection efforts, posing challenges for authorities like ESMA in ensuring market integrity and financial stability.

Categorizing smart contracts

ESMA’s working paper proposes a methodological framework to categorize smart contracts into distinct groups based on their functionalities and roles in the DeFi ecosystem:

  • Financial contracts: these smart contracts facilitate core financial operations within DeFi, including lending and borrowing protocols, initial coin offerings (ICOs), decentralized autonomous organizations (DAOs), and automated market makers (AMMs) powering decentralized exchanges.
  • Operational contracts: foundational smart contracts that provide essential infrastructure support, such as hosting libraries, optimizing resource allocation, and handling error management across decentralized applications.
  • Token contracts: manage the creation, distribution, and management of tokens adhering to Ethereum’s ERC standards, including fungible tokens (ERC20) and non-fungible tokens (NFTs) used for unique digital assets like collectibles and digital art.
  • Wallet contracts: smart contracts designed to enhance user interaction with the blockchain by managing wallet functionalities such as transaction fees, balances, and access controls.
  • Infrastructure contracts: core components that handle data manipulation, encoding and other fundamental operations critical for the functionality and scalability of blockchain applications and protocols.

Methodology and insights

ESMA’s categorization methodology uses natural language processing (NLP) and topic modelling techniques to analyse the source code of smart contracts deployed on Ethereum and other blockchain networks. By clustering these contracts based on shared features and functionalities, ESMA aims to provide insights into the evolving dynamics of DeFi and show potential areas of concern related to investor protection and financial stability.
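
ESMA’s paper doesn’t publish its code, but as a rough sketch of what clustering contract source code with topic modelling can look like, the snippet below applies scikit-learn’s latent Dirichlet allocation (LDA) to token counts from a handful of invented contract snippets; ESMA’s actual pipeline and corpus will differ:

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    # Tiny invented corpus of "contract source" snippets; ESMA analyses real Ethereum code.
    contracts = [
        "function transfer(address to, uint amount) balanceOf allowance approve",
        "function borrow(uint amount) collateral liquidate interest repay",
        "function swap(uint amountIn) pool reserves addLiquidity removeLiquidity",
        "function mint(address to, uint tokenId) tokenURI ownerOf approve",
    ]

    # Turn source text into token counts, then fit a small LDA topic model.
    vectorizer = CountVectorizer(token_pattern=r"[A-Za-z]+")
    counts = vectorizer.fit_transform(contracts)
    lda = LatentDirichletAllocation(n_components=2, random_state=0)
    lda.fit(counts)

    # The most characteristic tokens per topic hint at functional categories
    # (token transfers vs lending vs exchange logic), analogous to ESMA's groups.
    terms = vectorizer.get_feature_names_out()
    for topic_idx, weights in enumerate(lda.components_):
        top_terms = [terms[i] for i in weights.argsort()[::-1][:5]]
        print(f"topic {topic_idx}: {top_terms}")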

Potential model enhancements

While the current methodology provides valuable insights, ESMA acknowledges opportunities for further refinement:

  • Dynamic topic modelling: exploring dynamic topic modelling approaches to adapt to the evolving nature of smart contracts and DeFi protocols.
  • Integration with transactional data: incorporating transactional data analysis to understand real-world interactions and network effects within DeFi ecosystems.
  • Regulatory considerations: addressing regulatory challenges posed by decentralized technologies, including governance, transparency, and the protection of investor interests.

ESMA’s working paper marks a significant step towards understanding and regulating this rapidly expanding sector. By categorizing smart contracts into distinct groups, ESMA aims to enhance regulatory oversight, promote market integrity, and mitigate risks associated with decentralized finance.

Author: Alessandra Faranda


Innovation Law Insights is compiled by the professionals at the law firm DLA Piper under the coordination of Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Alessandra Faranda, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Miriam Romeo, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here. And check out a DLA Piper publication outlining Gambling regulation here, a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.