Innovation Law Insights

27 March 2025

AI Law Journal

Diritto Intelligente – March issue available now

The March issue of the AI Law journal published by DLA Piper's Italian Intellectual Property and Technology team is now available, with the latest updates on the legal challenges of artificial intelligence. Read it here.

 

Podcast

 
AI and Copyright – US Fair Use v EU TDM after the Delaware decision

We explore the landmark Delaware court decision in Thomson Reuters v Ross Intelligence, analyzing its implications for AI training data practices under the US fair use copyright exception, and we address how the outcome might differ under the EU's Text and Data Mining (TDM) exceptions.

Explore this topic in the latest episode of the Diritto al Digitale podcast, featuring Giulio Coraggio and Valentina Mazza from DLA Piper's Intellectual Property and Technology practice. Listen here.

 

Artificial Intelligence

 
Italian Senate approves draft bill on AI: Latest changes and persistent criticalities

On 20 March 2025, the Italian Senate approved the Artificial Intelligence Bill (the Italian AI Bill), a measure aimed at harmonising national legislation with the provisions of Regulation (EU) 2024/1689 (AI Act) and at clarifying the evolving regulatory landscape for AI in Italy.

The Italian AI Bill, like the AI Act, focuses on the transparent, responsible and rights-compliant development and use of AI systems in different sectors of society. It aims to define the basis for developing a national AI strategy to increase the country's strategic competitiveness. It introduces specific provisions to regulate the transparent and safe use of AI in:

  • health and scientific research
  • world of work and intellectual professions
  • judicial activity
  • public administration
  • national security

Below are the most relevant sector-specific provisions of the Italian AI Bill.

 

Using AI in critical sectors

Healthcare

The Italian AI Bill recognises the potential of AI in the medical sector, but at the same time regulates the technology to ensure its ethical and safe use. To this end, it lays down some basic rules, including a ban on using AI systems to select and condition access to healthcare services, an obligation to inform patients about the use of AI technologies, and a duty to measure performance to minimise the risk of errors.

The Italian AI Bill provides that AI systems must be used as a mere support to the prevention, diagnosis and treatment activities performed by medical professionals. The responsibility for the final decision remains exclusively with the physician, who must always monitor the proper functioning of the AI and check the outputs.

Scientific research

Art. 8 of the Italian AI Bill provides that research activities aimed at implementing AI systems, when carried out by public and private nonprofit entities or IRCCSs (Institutes for Hospitalization and Treatment of Scientific Character), are declared to be of significant public interest. As such, this research may benefit from the processing of personal data even without the consent of the data subjects, in accordance with the conditions set out in Article 9 GDPR and Article 110 of the Italian Privacy Code. However, the lawfulness of any processing remains subject to the approval of the relevant ethics committees and prior notification to the Italian Data Protection Authority.

Working environment

Article 11 of the Italian AI Bill emphasises the importance of respect for human dignity, transparency and the prohibition of discrimination in the use of AI in the labour sector, in line with the provisions of the legislation already in force. The Italian AI Bill specifies the obligation to adequately inform workers on the use of AI systems, placing itself in continuity with privacy and labour law regulations and with the provisions on the remote control of workers, which require the activation of safeguard mechanisms provided by the Workers' Statute.

Finally, to ensure continuous monitoring of the impact of AI on the world of work, the Observatory on the Adoption of Artificial Intelligence Systems has been established. Its task is to develop regulatory strategies, assess the effects of AI on the labour market and identify the sectors most affected by this technological transformation.

Intellectual professions

As already provided for the medical professions, Article 13 of the Italian AI Bill provides that AI systems can only be used as support tools for professional activity, without ever replacing the professional's intellectual contribution. Consequently, the professional can't entrust the entire provision of their intellectual service to an AI system, even if the recipient of the service gives their consent. This provision raises some critical issues in its application, due to the difficult distinction between a merely auxiliary use of AI and a use that's prevalent over human input.

The Italian AI Bill then provides that the professional must always inform the recipient of the service in a clear, simple and comprehensive manner about the use of AI systems. Although the provision only refers to information about the technology used, the obligation introduced goes beyond what's provided for in the AI Act, which doesn't impose any such limitation on AI systems that aren't high-risk. Moreover, the Italian AI Bill doesn't specify the contractual consequences if a professional uses AI systems without declaring this in advance.

Justice and public administration

Articles 14 and 15 of the Italian AI Bill regulate the use of AI in public administration and judicial activity. In both areas, the bill emphasises two fundamental aspects:

  • AI should only have a supporting role, without replacing the human operator's assessment and decision-making.
  • The public administration is committed to promoting the training and development of the digital skills of professionals in the field, so they can use AI in a conscious and responsible manner.

It's also important to note that, in Article 6 dedicated to the Artificial Intelligence Strategy, the Italian AI Bill introduces a specific provision for the use of AI by public administrations, to guarantee the sovereignty and security of citizens' sensitive data. This provision stipulates that AI systems intended for use in the public sector, with the exception of those used abroad in the context of military operations, must be installed on servers located in national territory.

 

Identification of Italian AI authorities

The Italian AI Bill identifies as competent national authorities for AI, also under the EU AI Act:

  • The Agency for Digital Italy (AgID), which will take on the role of notifying authority with functions of accreditation and monitoring of entities in charge of verifying the compliance of AI systems. AgID will also be responsible for promoting innovation and development in AI.
  • The Agency for National Cybersecurity (ACN): identified as the supervisory authority, as well as the lead authority for the use of AI for cybersecurity. In the public management of AI, the Bank of Italy, CONSOB and IVASS maintain a sectoral supervisory role for the credit, financial and insurance sectors.

The singling out of AgID and ACN – being government authorities – seems to overlook what the European Commission pointed out in its opinion (C(2024) 7814), where it recalled that the authorities must have the same level of independence as provided for in Directive (EU) 2016/680 for data protection authorities in law enforcement, migration management and border control, administration of justice and democratic processes.

AI and copyright

In Article 25, the Italian AI Bill regulates the protection of copyright with regard to works generated with the help of AI, clarifying that these are also protected by copyright, provided that their creation derives from the author's intellectual work. In line with what's already provided for in Articles 70-ter and 70-quater of the Italian Copyright Act, reproduction and extraction from works or other materials contained on networks or in databases to which one has legitimate access, carried out through the use of AI models and systems, including generative ones, is also permitted.

Instead, the provision that content produced by AI systems should be made clearly recognisable by a watermark with the acronym “AI” was deleted, in line with the European Commission's detailed opinion. This provision went beyond the requirements of Article 50(2) and (4) of the AI Act.

Amendments to the Italian Civil and Criminal Code

The Italian AI Bill, in Art. 17, amends Art. 9 of the Code of Civil Procedure to attribute to the exclusive jurisdiction of the ordinary court all “cases concerning the operation of an artificial intelligence system,” thereby excluding the possibility of bringing AI-related claims before the small claims court (giudice di pace).

Finally, Article 26, dedicated to criminal law, establishes a new aggravating circumstance related to the use of AI: perpetrators could be sentenced to more severe penalties if the crime was committed “through the use of artificial intelligence systems, when these, due to their nature or method of use, constituted a deceitful means, or when their use has in any case hindered public or private defense, or aggravated the consequences of the crime.”

If the use of AI is linked to the commission of crimes against the political rights of citizens (article 294 of the Italian Criminal Code), the penalty of imprisonment increases, ranging from two to six years. Furthermore, the Italian AI Bill creates a new type of offense, aimed at punishing the sharing of deepfakes without the consent of the person portrayed, when this causes them undue damage.

Conclusions

The text of the Italian AI Bill approved by the Italian Senate now passes to the Chamber of Deputies for examination. Once an agreement has been reached on the final text, the government will have to complete the Italian legislative framework on AI: the Italian AI Bill requires it to draft “one or more legislative decrees to define an organic discipline relating to the use of data, algorithms and mathematical methods for the training of artificial intelligence systems” (Art. 16).

According to Article 24 of the Italian AI Bill, the government also has to adopt, within 12 months from the law entering into force, legislative decrees for adapting national legislation to the AI Act. These decrees will have to include definitions of the supervisory, inspection and sanctioning powers of ACN and AgID, and measures for updating the regulations in force on banking, financial, insurance and payment services. The legislative delegation also includes the definition of rules on civil liability for damages resulting from the use of AI.

The only partial coverage of the sectors affected by AI, together with the persistent incompatibility of some of its provisions with European legislation, makes the approval of the Italian AI Bill a missed target. Almost eight months after the AI Act came into force, Italy's strategy on AI still remains unclear. If the Chamber of Deputies takes a few months to scrutinise the text, we'll have to wait much longer for the approval of the government-initiated legislative decrees, which should finally lead to a clearer and more defined landscape for AI.

For more information on the legal issues of AI, we recommend reading our AI law journal. Check the latest editions here.

Authors: Marianna Riedo, Federico Toscani

 

Third Draft of the General-Purpose AI Code of Practice: Key updates and implications for generative AI stakeholders

The third draft of the General-Purpose AI Code of Practice has been released, introducing significant refinements to enhance AI governance and compliance. Below, I outline the main changes and their potential impact for businesses.

Notable updates in the third draft of the General-Purpose AI Code of Practice

Here are the main changes:

  • Streamlined structure: The Code now presents a concise set of high-level commitments, each accompanied by detailed implementation measures, ensuring clarity and practicality.
  • Transparency and copyright commitments: All providers of general-purpose AI models are subject to two primary commitments:
    • Transparency: A user-friendly Model Documentation Form has been introduced to facilitate comprehensive and accessible model information. Some open-source model providers may be exempt from specific transparency obligations, aligning with the AI Act.
    • Copyright: The Code delineates clear measures to uphold copyright standards, ensuring that AI models respect intellectual property rights.
  • Safety and security measures: Providers of advanced AI models identified as posing systemic risks are now subject to 16 additional commitments. These encompass:
    • Systemic risk assessment: Mandatory evaluations to identify and mitigate potential widespread impacts.
    • Incident reporting: Structured protocols for reporting and addressing adverse events.
    • Cybersecurity obligations: Enhanced measures to protect AI systems from malicious threats.

Implications for providers and deployers of generative AI systems

Here are the main impacts of the third draft of the General-Purpose AI Code of Practice:

  • Enhanced compliance requirements: Providers have to adhere to detailed transparency and copyright obligations, necessitating robust documentation and data management practices.
  • Risk management: Deployers of high-capability generative AI systems have to implement comprehensive risk assessment and mitigation strategies to align with the new safety and security commitments.
  • Adaptation to evolving standards: The Code emphasizes flexibility to accommodate technological advancements, urging stakeholders to stay informed and agile in their compliance efforts.

Next steps and open questions

Stakeholders are invited to provide feedback by 30 March 2025 and the final version of the Code is expected in May 2025. In my view, the potential questions are:

  • Will the Code of Practice become a guide to interpret the provisions of the AI Act?
  • Will the Code of Practice become a more flexible and adjustable guide to adapt to quick technological developments?
  • Will courts and regulators look at the Code of Practice in enforcing the AI Act?
  • Are businesses de facto obliged to comply with the Code of Practice?

For more information on the legal issues of AI, we recommend reading our AI law journal. Check the latest editions here.

Author: Giulio Coraggio

 

Intellectual Property

 
Copyright law and AI: US case law excludes machines from authorship

On 18 March 2025, the US Court of Appeals for the District of Columbia Circuit upheld previous rulings denying copyright protection to the work A Recent Entrance to Paradise, allegedly created exclusively by the “Creativity Machine,” an AI system developed by Dr. Stephen Thaler.

The case dates back to 2019 when Thaler filed a copyright registration request with the US Copyright Office, listing the Creativity Machine as the sole author and himself as the rights holder. The Office rejected the application. This decision was later upheld by the Review Board and the US District Court for the District of Columbia.

During the proceedings, Thaler argued that, under the work-for-hire doctrine, he should be entitled to the copyright. He also claimed that the work should still be protected as it was created under his direction and control.

With its latest decision, the Court of Appeals reaffirmed that US copyright law (Copyright Act of 1976) requires every protected work to be created by a human being. While the law doesn’t explicitly define the term “author,” established interpretations attribute this status exclusively to natural persons.

Although the court acknowledged that some legally recognized authors in the past didn’t fully meet all these criteria, it ruled that an AI cannot, under any circumstances, possess legal capacity or creative intent.

The role of AI in creating copyright-protected works

A key aspect of the ruling is the clarification that the requirement of human authorship doesn’t exclude protection for works created with AI assistance. If a human uses AI tools to develop a work, the result can be protected, provided there’s a sufficient level of human intervention. The issue in Thaler’s case was that the Creativity Machine was listed as the sole author of the work.

The court also dismissed the argument that the human authorship requirement discourages creativity among those who develop or use AI.

The debate on authorship in the age of AI

For many, this decision is unsurprising and aligns with long-established legal principles. The real limitation of Thaler’s case is not just the rejection of his request but rather the fact that, over the years, his arguments have lost relevance in the current debate.

There are two main reasons for this: first, he listed the Creativity Machine as the sole author of the work; second, some of his arguments – such as the alleged unconstitutionality of the human authorship requirement and the idea that he himself could be recognized as the author – were not further developed.

Today, the key question is not so much who can be considered an “author” but rather what defines an author. The debate revolves around how much AI integration is permissible in the creative process before human authorship is lost.

In the US, the recent Second Report on Copyright and Artificial Intelligence by the Copyright Office clarified that using AI tools doesn’t exclude copyright protection, as long as there’s meaningful human control over expressive elements. This principle is not a shift in policy but aligns with the existing Compendium of US Copyright Office Practices and the 1965 Report to the Librarian of Congress, which stated that works generated solely by mechanical or random processes, without any human creative input, cannot be registered.

In Europe, the situation is quite similar. US copyright principles largely reflect those in the European legal framework. The Court of Justice of the European Union has long held that a work is protected only if it reflects the author’s “personality” and is the result of “free and creative choices.”

So, under EU law, AI can be used as a creative tool, but protection applies only to the parts of a work where human contribution is clearly identifiable.

Ultimately, the debate on AI in creative fields is far from over. While it now seems well established that only a human can be recognized as an author, the real question remains: how much human intervention is required for a work to be eligible for copyright protection? This is the broader and more complex issue that copyright law will need to address in the coming years.

For more information on the legal issues of AI, we recommend reading our AI law journal here.

Author: Maria Vittoria Pessina

 

Search engines under the lens: EUIPO Report unveils new perspectives for intellectual property protection

In an era where the web is the main channel for accessing information, EUIPO’s recent report “Search Engines - Challenges and good practices to limit search traffic towards intellectual property infringing content and services” analyses in detail how search engines can become tools for disseminating infringing content, raising complex legal and operational challenges.

An overview of the role of search engines

The report highlights how search engines, through their “organic” and “paid for” results, are the main gateway for users. While organic results are based on crawling, indexing and ranking algorithms, paid results are derived from advertising agreements. This dual function, while crucial for content discovery, can be manipulated by actors exploiting abusive SEO and SEM techniques to direct traffic to sites offering counterfeit products or pirated content.

Technical insights: "Organic" v "Paid for" results

The report takes an in-depth look at how search engines operate:

  • “Organic” results: these are generated automatically through complex indexing processes. The use of sophisticated algorithms makes it possible to determine the relevance and quality of content, but makes it difficult to counter illicit practices, such as the intensive use of keywords or camouflage techniques to mask counterfeit content.
  • “Paid for” results: based on advertising models and contractual agreements, these results offer advertisers the possibility to position themselves in privileged spaces. But, even in this case, IP infringers may use deceptive marketing strategies, inserting IPR-relevant keywords and misleading the user.

Legal challenges and dilemmas

Some of the main critical issues that emerged in the report include:

  • Identifying infringing content: rights holders struggle to identify and counter sites that, through manipulative SEO/SEM techniques, rank high in SERPs. The presence of mirrors and “disguised” versions of original sites further exacerbates the problem.
  • Regulatory developments: with the introduction of regulations such as the Digital Services Act, the regulatory framework is becoming increasingly complex, requiring digital operators to implement specific measures to prevent the dissemination of illegal content.

Good practices and technological innovations

The report proposes a set of best practices, divided into preventive and reactive measures, that search engines can adopt:

  • Preventive measures:
    • Adopting clear internal policies prohibiting the misuse of services for the promotion of infringing IP content.
    • Blocking terms related to IP violations in auto-completion functions, reducing the risk of directing users to illegal content.
    • Voluntarily demoting and de-indexing contested sites on the basis of judicial or administrative decisions.
  • Reactive measures:
    • Implementing “Notice and Action” systems that facilitate the reporting and removal of disputed content, thanks also to flagging mechanisms by users and rights holders.
    • Publishing transparency reports documenting the number of reports received and actions taken.
  • Technological innovations:

Integrating AI into the indexing process and the analysis of notifications is a major step forward. These tools make it possible to refine the criteria for detecting abusive practices and improve the effectiveness of interventions, although they introduce new challenges in terms of transparency and accountability.

Future perspectives and implications

The EUIPO report not only diagnoses current critical issues, but also offers a forward-looking view of developments in the sector. Highlights include:

  • The evolution of customisation: the increasing use of customised algorithms makes it difficult to obtain uniform results, further complicating the identification of illegal content.
  • New search architectures: the adoption of distributed search models and the emergence of search engines for the dark web may radically change the landscape of content discovery, introducing further challenges for the regulation and protection of IP rights.
  • Impact of AI: although AI promises to improve effectiveness in detecting malpractices, its use raises questions about transparency, algorithmic bias and control of automated decisions.

Conclusions

The EUIPO report helps us understand the complex dynamics that characterise the search engine ecosystem and outlines the strategies needed to protect intellectual property in an ever-changing digital environment. The detailed analysis highlights how collaboration between rights holders, digital platforms and regulators is indispensable to ensure a balance between freedom of access to information and the need to fight illicit practices. In a world where technological innovation is proceeding at a dizzying pace, dialogue between regulation and technology is the key to a safer and fairer digital future.

Author: Maria Rita Cormaci

 

Technology, Media and Telecommunications

 
Public consultation on the applicability conditions of the general authorisation regime to Content Delivery Networks

With Resolution No. 55/25/CONS of 6 March 2025, published on 14 March, the Italian Communications Authority (AGCom) launched a public consultation on a document examining the conditions under which the general authorization regime provided for by the Electronic Communications Code (Legislative Decree No. 259/2003, as amended by Legislative Decree No. 207/2021 – ECC) applies to Content Delivery Networks.

Content Delivery Networks or CDNs are networks comprising a set of geographically distributed servers (also known as caches) aimed at accelerating and optimizing the delivery of content to end users. As observed by AGCom, this type of network ensures that requests for specific content are served by the server closest to the user and in shorter times compared to when they’re managed by the content provider's central hub. Generally speaking, as the Authority noted, sending content via CDNs allows for an improved service quality by speeding up content access. And it prevents network backbone congestion caused by the simultaneous transmission of internet traffic, even when unrelated to that delivered through CDNs.
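The routing principle AGCom describes – serving each request from the cache closest to the user rather than from the content provider's central hub – can be illustrated with a minimal sketch. This is purely illustrative: the server names and latency figures are invented, and real CDNs use DNS-based or anycast routing rather than a simple lookup.

```python
# Illustrative sketch of CDN request routing: pick the edge cache with the
# lowest measured latency for a given user, instead of the distant origin hub.
# All names and numbers below are hypothetical.

def pick_closest_cache(caches, user_latency_ms):
    """Return the cache server with the lowest latency for this user."""
    return min(caches, key=lambda cache: user_latency_ms[cache])

caches = ["milan-cache", "frankfurt-cache", "origin-hub"]
latency_ms = {"milan-cache": 12, "frankfurt-cache": 28, "origin-hub": 95}

print(pick_closest_cache(caches, latency_ms))  # serves from the nearest edge
```

The same logic explains the congestion point AGCom makes: because most requests terminate at a nearby edge server, they never traverse the network backbone to the origin.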

The public consultation forms part of the investigatory procedure initiated with Resolution No. 55/25/CONS. It's aimed at reviewing the conditions of applicability of the general authorization regime for the provision, ownership, management, or control of a CDN infrastructure in national territory for the distribution of content via the internet. The goal is to promote a coherent authorization regime for all CDN Providers and Content and Application Providers (CAPs, ie content providers).

The consultation document consists of three sections:

  • an introductory section;
  • a section on AGCom’s guidelines regarding the applicability conditions of the general authorization regime provided for by the ECC to CDNs; and
  • questions addressed to consultation participants.

The introductory section first describes the operation of CDNs (as outlined above) and identifies the main CDN categories, broadly classified into:

  • private CDNs, where the content provider has complete control over the CDN network servers, used exclusively for distributing their content;
  • public CDNs (also known as global CDNs), where the server network is managed by a specialized CDN provider (different from the CAP) offering content distribution services to multiple Content Providers, who neither control the servers nor share their transmission capacity; and
  • mixed-use CDNs, where a content provider can use their distribution infrastructure both for their content and to provide CDN services to third parties.

The first section further explores the two main traffic distribution methods via CDNs – namely, through interconnection between the CAP’s network and the operator's network, or by installing the CAP CDN caches directly in the operator's network – as well as the main characteristics of commercial agreements among electronic communication operators, Content and Application Providers, and CDN Providers.

The second section of the consultation document focuses on AGCom's guidelines concerning the applicability of the general authorization regime provided for by the ECC to CDNs. AGCom observes that the general authorization regime under Article 11 of the ECC should also extend to all Content and Application Providers that own, manage, or control a CDN within Italy for distributing their content, and to CDN Providers whose infrastructure is located within the national territory. This consideration stems from the fact that such entities essentially operate as network providers by contributing to the transmission of publicly accessible data at an infrastructural level.

The document concludes with four questions for consultation participants, asking them:

  • whether they “agree with the preliminary review on CDN functionality principles, CDN types, traffic distribution methods via CDNs, and commercial agreements among operators, Content and Application Providers, and CDN Providers”;
  • whether they “agree that a CDN provider, within the provision of services through their CDN infrastructure installed nationally, should fall under the general authorization regime outlined in Article 11 of the Code”;
  • whether they “agree that a content provider, who owns, manages, or controls a CDN infrastructure on national territory for distributing their content, should fall under the general authorization regime outlined in Article 11 of the Code”;
  • to provide “any additional considerations on the subject, where deemed necessary.”

Interested parties can submit their observations on the consultation document and any other relevant aspects on the topic by 13 April 2025.

Authors: Massimo D'Andrea, Flaminia Perna, Matilde Losa

 


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as Diritto Intelligente, a monthly magazine dedicated to AI, here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.