
12 September 2024 – 14 minute read

Innovation Law Insights

12 September 2024
Journal

Diritto Intelligente – Issue No. 1

We are thrilled to announce the launch of Diritto Intelligente, a groundbreaking monthly journal on AI-related EU laws, cases, and opinions (in English) from DLA Piper's Italian Intellectual Property and Technology practice.

Each month, we'll bring you the most insightful and cutting-edge legal analysis on AI.

We hope you enjoy it! Read the first issue here.

 

Podcast

Inside the EU AI Act: Negotiations and key future legal challenges with Laura Caroli

In this episode of “Diritto al Digitale,” Giulio Coraggio, Location Head of DLA Piper's Italian IPT group, interviews Laura Caroli, a key negotiator of the EU AI Act. Laura takes us through the intricate journey of the first legislation on AI from its early drafts to the final version. You can listen here.

 

Artificial Intelligence

Framework Convention on Artificial Intelligence signed: Is this the first step towards a global legal framework?

The Framework Convention on Artificial Intelligence (Convention), the first legally binding international treaty on AI, gathered its first signatures in Vilnius on 5 September 2024.

The Convention requires signatory states to take legislative, administrative or other measures to regulate the entire lifecycle of AI systems. The main objective is to promote technological innovation while protecting human rights and democratic principles (eg integrity of democratic processes, the rule of law, the independence of the judiciary and access to justice).

General principles

In line with Regulation (EU) 2024/1689 (AI Act), the Convention adopts a risk-based approach and provides a set of principles that should guide the development and use of AI systems. How the principles are applied will vary according to the level of risk associated with the specific system. While the AI Act regulates specific AI models, systems or practices, the Convention focuses on individual activities that are part of the lifecycle of systems with risk potential, regardless of the risk posed by the system as a whole. The principles apply whether the AI is developed by a public body or by a private company. These core principles include:

  • Transparency and oversight
  • Accountability and responsibility
  • Equality and non-discrimination
  • Privacy and data protection
  • Reliability and accuracy

Specific obligations

The Convention also outlines some more specific obligations for state parties, including:

  • ensuring effective remedies in the event of harm caused by AI;
  • ensuring that public authorities and affected individuals are fully informed about the use and operation of AI systems;
  • adopting measures to identify, assess, prevent and mitigate risks posed by AI systems;
  • promoting public debate or multi-stakeholder consultation on key AI issues;
  • promoting an appropriate culture of AI and responsible use of digital tools among the population; and
  • establishing effective oversight systems to monitor the implementation of the Convention (eg independent oversight authorities).

The Conference of the Parties

Finally, Article 23 of the Convention provides for the establishment of a "Conference of the Parties" (Conference), a collegial body comprising representatives of the signatory states. The purpose of the Conference is to facilitate the application and implementation of the Convention, to consider possible amendments or additions to the Convention, to resolve any questions of interpretation and to facilitate the amicable settlement of any disputes concerning the application of the Convention. In addition, each party to the Convention has to submit a periodic report to the Conference on the activities undertaken to implement the Convention. In general, the Conference is the main instrument for implementing international cooperation, a central principle of the Convention.

Conclusions

The Convention is an important step towards creating a global legal framework for AI that will stimulate and promote its safe use beyond Europe's borders. The AI Act, as the first major piece of legislation on the subject, will play a central role in this implementation activity and it is expected that many provisions will be based on it. It remains to be seen how states will put these principles into practice, with the hope that more countries will join the Convention and contribute to safe and reliable use of AI on a global scale.

Author: Federico Toscani

 

Data Protection and Cybersecurity

Dutch DPA fines Clearview AI for unlawfully processing biometric data

The Dutch Data Protection Authority (Dutch DPA) has imposed a substantial fine of EUR30.5 million on Clearview AI for several breaches of the GDPR.

This decision is consistent with similar actions taken by other European Data Protection Authorities, including the Italian Data Protection Authority.

Background

Clearview AI is a commercial company that provides facial recognition services primarily to intelligence and investigative agencies. The company operates a massive database containing over 30 billion facial images, which it acquires through automated scraping of publicly available content on the internet. These images are then processed into unique biometric codes without the knowledge or consent of the individuals featured in them.

Violations

The Dutch DPA’s investigation revealed several significant violations of data protection laws:

  • Legitimate interest: Clearview AI argued that it processed personal data on the basis of a legitimate interest. The Dutch DPA, however, found that Clearview AI had not articulated a legitimate interest that was both protected by law and sufficiently specific. The Dutch DPA also concluded that Clearview's processing was not necessary and that the fundamental rights and interests of data subjects outweighed Clearview's commercial interests.
  • Processing of biometric data: The Dutch DPA found that Clearview AI was processing biometric data of individuals located in the Netherlands without a lawful basis under Article 9(2) of the GDPR. The only potential exception under Article 9 would be data that's manifestly made public by the data subject. But, according to the Dutch DPA, the mere fact that data is found online doesn't imply that individuals intended to make all their data publicly accessible, nor did they provide clear and affirmative consent.
  • Transparency: Clearview AI was found to be deficient in providing clear and comprehensive information to data subjects about the processing of their personal data. This includes not disclosing the legal basis for processing, retention periods, and categories of data recipients. The Dutch DPA highlighted that data subjects were not adequately informed about how their photos (including metadata) might be used for facial recognition purposes. Clearview’s reliance on a privacy statement on their website was deemed insufficient, as it didn't actively notify individuals about the collection and use of their data.
  • Data subjects' rights and EU representation: Finally, the investigation also revealed that Clearview AI failed to facilitate data subjects' right of access to their personal data and didn't designate a representative in the EU.

The Dutch DPA has also issued a warning against using Clearview AI's services, stating that such use is prohibited. The DPA noted that Clearview AI didn't cease its unlawful practices even after the investigation began. If Clearview continues to violate data protection regulations, it could face additional fines of up to EUR5.1 million.

Conclusions

The Dutch DPA's decision is in line with actions taken by other European authorities. Similar rulings have been made by the Italian DPA, the French DPA, the ICO, and the Hellenic DPA. This reflects a broader trend among European data protection authorities to scrutinize services involving AI and their compliance with GDPR standards. There's an increasing focus on ensuring that such services adhere to key principles of the GDPR, particularly regarding transparency, legal basis for processing, data retention practices, and data minimization.

Author: Roxana Smeria

     

Intellectual Property

New request for a preliminary ruling to the CJEU: When is a trademark deceptive?

In a case involving the Fauré Le Page trademark, the French Cour de Cassation has sought guidance from the Court of Justice of the European Union (CJEU), requesting a preliminary ruling on the concept of deceptiveness in trademarks (Case C-412/24). The case revolves around whether a trademark that includes a reference to a date, falsely suggesting a long history and established reputation of the brand, can be considered deceptive even if it doesn't mislead consumers about the goods or services themselves.

Background

The dispute began when Goyard ST-Honoré sought to cancel the FAURÉ LE PAGE PARIS 1717 trademarks, arguing that the element "1717" in the trademark falsely implied that the company had been in business since that year, when in reality, the current owner had only existed since 2009. The Paris Court of Appeal agreed, ruling that the trademarks were invalid as the date "1717" was likely to mislead the public into believing that Fauré Le Page Paris had a long-standing history dating back to the 18th century. This judgment was significant because it introduced the idea that a trademark's deceptiveness could extend beyond the nature, quality, or geographical origin of the goods or services, to include the characteristics of the company itself.

Fauré Le Page Paris appealed to the Cour de Cassation, arguing that the Trademark Directive should only apply to trademarks that deceive consumers about the goods and services offered, not about the company's history or reputation. Under Article 20(b) of the Trademark Directive, a trademark can be revoked if, after the registration date, as a result of the use made by its owner or with its owner's consent, it's liable to mislead the public, particularly as to the nature, quality or geographical origin of the goods or services.

However, the Cour de Cassation considered that a trademark could be deceptive if it conveyed false information about the company's age or reliability, leading consumers to attribute undeserved prestige or quality to the products.

Consequently, the Cour de Cassation referred two main questions to the CJEU: whether including a fanciful date in a trademark, which misleads consumers about the company's age and expertise, can be deemed deceptive under the Trademark Directive, and if so, whether this deceptiveness must specifically relate to the goods and services or could extend to the company's characteristics.

In any case, the question that arises is: if the geographical origin of the goods and services is regarded as a characteristic of the goods or services, then why not the date of establishment of their manufacturer? Both are intangible characteristics, but both can have an important impact on the (perceived) quality of the goods and services and on consumers' purchasing decisions.

This case underscores the complexities surrounding the concept of deceptiveness in trademark law and, above all, calls for vigilance on the part of companies that relaunch old brands by registering trademarks that include a historical date when the associated goodwill has not been transferred. The CJEU's forthcoming decision could have significant implications for how trademarks with historical references are treated across the EU.

Author: Maria Vittoria Pessina

US COPIED Act Proposal for the Protection of AI-Generated Content: A New Era of Transparency?

In the US, on 11 July 2024, three senators – Maria Cantwell, Chair of the Senate Commerce Committee, Marsha Blackburn, and Martin Heinrich – introduced a legislative proposal known as the COPIED Act (Content Origin Protection and Integrity from Edited and Deepfaked Media Act). The bill aims to regulate the use of AI-generated content and protect copyright, promoting transparency and traceability for synthetic content, including content generated or modified by algorithms, particularly AI-based systems.

The US Congress, through this legislative proposal, acknowledges several challenges related to the use of AI:

  • Lack of visibility into how AI systems function.
  • Limited transparency regarding the data used to train such systems.
  • Absence of shared standards and practices for developing and deploying AI.

The COPIED Act adopts the definition of AI set out in the National AI Initiative Act of 2020. Additionally, it introduces the concept of "synthetic content," referring to information – such as images, videos, audio, and text – generated by algorithms. "Covered content" refers to any digital representation protected by copyright, as outlined in Section 102 of Title 17 of the US Code.

The law entrusts three key government agencies – the National Institute of Standards and Technology (NIST), the US Patent and Trademark Office (USPTO), and the US Copyright Office (USCO) – with the task of developing guidelines for consensus-based standards. These standards are intended to ensure the traceability and transparency of synthetic content through specific measures:

  • A requirement to identify the origin of AI-generated or AI-modified content by applying a watermark.
  • A requirement for creators and users of AI technologies to make available information regarding the content's provenance.
  • A prohibition on removing, altering, or turning off information regarding the origin of synthetic content, except in specific cases such as research or security.
  • Enhanced control and transparency in the use of AI-generated content.

The Act mandates the implementation of these measures within a strict two-year timeline from the law's enactment, underscoring the urgency and significance of the issue.

The COPIED Act also prohibits using copyrighted works to train AI systems or to create new synthetic content without the explicit and informed consent of the copyright holders. This approach would ensure that authors are aware of how their works are being used and have the right to set the conditions of use, including compensation.

The COPIED Act grants the Federal Trade Commission (FTC) the authority to enforce the bill's provisions under the framework established by the Federal Trade Commission Act. State Attorneys General are also empowered to take legal action to enforce the prohibitions outlined in the law.

Another key aspect of the COPIED Act is the regulation of deepfakes – audio or video content manipulated through AI to create deceptive representations. The law mandates that NIST, in collaboration with the USPTO and the USCO, promote awareness campaigns to educate the public about the risks and challenges posed by AI-manipulated content.

This issue is also addressed in Europe, where the AI Act, which entered into force on 1 August 2024, requires clear disclosure when content has been created or modified using AI. This transparency requirement protects freedom of expression in creative and satirical contexts without compromising copyright.

The COPIED Act is a significant step for the US, aligning it with the global trend toward regulating AI. This legislation, which aims to protect both content creators and consumers, follows the European regulatory experience and represents a crucial step toward ensuring responsible, transparent, and secure use of AI, especially in a context where digital content and its alterations are becoming increasingly sophisticated.

Author: Rebecca Rossi


Innovation Law Insights is compiled by the professionals at the law firm DLA Piper under the coordination of Arianna Angilletta, Matteo Antonelli, Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
