
Innovation Law Insights

26 September 2024
Podcast

Who is the Provider under the EU AI Act?

The EU AI Act sets out stringent obligations and responsibilities for AI providers, making it imperative for companies to grasp these requirements to mitigate risks and drive innovation. In the latest episode of Diritto al Digitale’s “Legal Break” series, DLA Piper IPT lawyer Tommaso Ricci delves into the key obligations AI providers have to follow under the EU AI Act and offers expert insight into navigating the new regulatory environment. You can watch here.

 

Artificial Intelligence

Complementary Impact Assessment on the proposed AI Liability Directive published: the possible changes

On 19 September 2024, the Complementary Impact Assessment (the Study) on the proposed directive on adapting non-contractual civil liability rules to AI (AILD) was published. Commissioned by the Committee on Legal Affairs of the European Parliament, the Study aims to identify possible gaps and problems in the proposed legislation. It also aims to respond to alleged incompleteness in the European Commission’s impact assessment.

The AILD, along with the revision of the Product Liability Directive (PLD), is the main instrument to address liability arising from the use of AI. The AILD aims to harmonise the procedural aspects of AI-related claims brought before the courts of the member states. A critical point is the easing of the burden of proof, which is particularly complex due to the opacity of AI systems (the “Black Box” problem). To address this challenge, the proposal provides, in specific cases, for the right of the injured party to obtain disclosure of evidence and documents relevant to understanding the functioning of the system, as well as a presumption of causation where damage results from a use of AI that does not comply with the provisions of the AI Act.

Interaction between AILD, PLD and the AI Act

The Study examines how the AILD interacts with other regulatory instruments on product liability and AI. In particular, it recommends:

  • aligning key definitions to ensure consistency of terminology between the AILD and the AI Act to avoid ambiguity in interpretation; and
  • ensuring the application of the AILD to those cases (eg discrimination, personal rights, damage caused by non-professional users) that fall outside the scope of the PLD.

Scope of application

The Study also suggests that the scope of the AILD should be extended beyond high-risk AI systems to include systems defined as “high impact”. This could cover General Purpose AI (eg ChatGPT) and software that, although not properly classified as AI systems, presents similar problems of transparency and opacity to “pure” AI systems, turning the AILD into what the Study calls a “software liability instrument”. This approach is reasonable: where systems pose the same challenges to the traditional liability framework, a distinction based on a mere technological difference makes little sense, and the same rules should apply to all systems that, regardless of their classification, raise the same problems of opacity and transparency.

Strict liability and negligence

The Study highlights the consequences of framing liability as either strict or negligence-based.

With regard to strict liability, originally envisaged by the 2020 European Parliament Resolution for high-risk AI systems, the Study confirms it as a possible solution for prohibited and high-risk systems, since the protection of the public takes precedence over the chilling effect such a regime would have on innovation. At the same time, it distinguishes between “legitimate-harm models”, which can adversely affect a person even when used correctly (eg scoring systems), and “illegitimate-harm models”, which could under no circumstances cause harm if used correctly, and calls for a strict liability regime only for the latter.

On the other hand, regarding negligence-based liability and the AILD’s mechanisms for easing the burden of proof, the Study emphasizes that:

  • the duty of disclosure may be of little practical use given the highly technical nature of the documents it covers. It’s also unclear how the requirement that the plaintiff provide evidence demonstrating the plausibility of its claim will be applied, or whether the presumption will apply in the case of a breach of the duty to provide AI training (so-called AI literacy), for example where an inadequately trained employee causes damage; and
  • the presumption of causation is difficult to activate, since to obtain it, the plaintiff would still have to prove the fault of the damaging party and the damage itself.

In any case, while acknowledging the limitations inherent in the proposal, the Study stops short of proposing a presumption of fault, which would have disruptive effects on innovation in the Union.

From directive to regulation?

Finally, the Study considers the appropriateness of converting the AILD from a directive into a regulation. This change, already made and consolidated in other areas of EU law, would ensure uniform application of the rules throughout the EU and avoid the discrepancies that would result from national transposition of the directive. This is particularly relevant given that the AILD aims only for minimum harmonization, leaving implementation to the member states, which could introduce more specific rules. Even with the AI Act in place, operators would therefore remain exposed to possible differences in treatment in terms of civil liability.

Conclusions

The Study underlines the importance of a clear, coherent and effective liability framework for AI, providing operators with a uniform regime throughout the EU and citizens with effective redress in the event of damage caused by AI. In this sense, the observations contained in the Study are an important starting point for the development of the directive, which is currently on hold after being proposed almost two years ago. If the suggestion to adopt a regulation were followed, there would be a real risk that the current proposal would be withdrawn.

Author: Federico Toscani

 

Data Protection

AI and GDPR: ECJ AG on balancing automated decision disclosure and trade secrets

The recent European Court of Justice (ECJ) Advocate General’s opinion in case C-203/22 is an important development in addressing how companies using AI can balance automated decision transparency with the protection of trade secrets, while complying with the requirements of the GDPR.

The GDPR case on automated decision-making and its relevance to AI

In the case at hand, an Austrian citizen was denied a mobile phone contract following an automated credit check conducted by a company. The decision was fully automated, with no human intervention. The individual sought to understand how her personal data was processed and the logic behind the automated decision that affected her. However, the company refused to disclose critical details, citing its algorithm as a protected trade secret under Directive (EU) 2016/943.

The ECJ’s involvement drew attention to two key issues:

  • Transparency under GDPR: How much detail about AI-driven decisions do companies have to disclose to data subjects?
  • Protection of trade secrets: Can companies refuse to disclose details of their AI algorithms by invoking trade secret protection?

The opinion of the Advocate General provides important guidance on how these issues intersect and impact the development and deployment of AI technologies.

AI and GDPR: The right to transparency

Under Article 22 of the GDPR, individuals have the right not to be subject to decisions based solely on automated processing, including profiling, where those decisions have legal or significant personal implications. This provision is particularly relevant for AI systems, which often make autonomous decisions without human oversight. In addition, Article 15(1)(h) of the GDPR grants individuals the right to “meaningful information” about the logic behind the automated decision (such as an AI decision) that affected them.

For AI developers, this means that transparency is not optional; individuals must be given enough information to understand how their personal data is processed and how AI-driven decisions are made. The opinion clarified that this doesn’t necessarily mean disclosing all the technical details of an algorithm, but rather providing clear and understandable information about:

  • The main factors that influenced the decision.
  • The weight given to those factors.
  • The outcome of the decision.

For example, if an AI system evaluates creditworthiness, the company should explain what types of data (such as income or payment history) were used, how those factors were weighted, and how they led to the final decision. This explanation must be accessible and clear enough for the average person to understand.
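To make this concrete, below is a minimal, purely illustrative Python sketch of how a provider of a simple linear scoring system might generate such an explanation. The factor names, weights, and approval threshold are all hypothetical, invented for illustration; they don’t come from the case or from any real credit model.

```python
# Purely illustrative sketch: a hypothetical linear credit-scoring model
# producing the kind of explanation the AG's opinion points to: the main
# factors, the weight given to each, and the outcome of the decision.
# All factor names, weights, and the threshold below are invented.

FACTOR_WEIGHTS = {
    "monthly_income": 0.5,    # hypothetical positive weight
    "payment_history": 0.3,   # hypothetical positive weight
    "existing_debt": -0.2,    # hypothetical negative weight
}
APPROVAL_THRESHOLD = 0.6      # hypothetical cut-off score


def explain_decision(applicant: dict) -> str:
    """Return a plain-language explanation of an automated credit decision."""
    # Compute each factor's contribution to the overall score.
    contributions = {
        factor: weight * applicant[factor]
        for factor, weight in FACTOR_WEIGHTS.items()
    }
    score = sum(contributions.values())
    outcome = "approved" if score >= APPROVAL_THRESHOLD else "refused"

    # Rank factors by the size of their contribution to the decision.
    ranked = sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = [f"Your application was {outcome} "
             f"(score {score:.2f}, threshold {APPROVAL_THRESHOLD})."]
    lines += [f"- {factor}: weight {FACTOR_WEIGHTS[factor]:+.1f}, "
              f"contribution {value:+.2f}"
              for factor, value in ranked]
    return "\n".join(lines)


if __name__ == "__main__":
    # Inputs normalized to [0, 1], again purely for illustration.
    print(explain_decision({
        "monthly_income": 0.8,
        "payment_history": 0.9,
        "existing_debt": 0.5,
    }))
```

Real credit models are rarely this simple, but the AG’s point carries over: what the data subject is owed is an intelligible account of the factors, their weight, and the outcome, not the source code of the algorithm.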

The role of trade secrets in AI

Many companies using AI view their algorithms as proprietary trade secrets that give them a competitive advantage. The ECJ Advocate General’s opinion recognized the importance of protecting trade secrets, but emphasized that trade secrets cannot be used as an all-encompassing shield to avoid transparency obligations under the GDPR.

Instead, the AG suggested that companies must strike a balance:

  • Companies should provide general explanations of how their AI systems work without disclosing detailed, proprietary algorithms.
  • Regulators or courts can step in to ensure that companies provide sufficient transparency, while protecting intellectual property.

This sets a precedent for AI developers, signalling that while trade secret protection remains important, it cannot override the rights of individuals to understand how AI-driven decisions are made about them.

Implications for AI development and deployment

The ECJ Advocate General’s opinion has significant implications for businesses and industries that rely on AI for decision-making, particularly in areas such as finance, healthcare, insurance, and recruitment, where AI is often used to make decisions with significant personal impact.

Key takeaways:

  • Explainable AI is non-negotiable: Organizations have to make sure their AI systems are not only accurate, but also explainable. Individuals affected by AI decisions have a right to clear explanations, and companies must be prepared to provide them.
  • Balance innovation with compliance: AI developers need to be strategic in protecting their trade secrets, while ensuring compliance with transparency obligations under GDPR. They must focus on a high level of transparency – disclosing enough for individuals to understand decisions, without revealing the inner workings of their proprietary systems.
  • Building trust in AI: This ruling reinforces the idea that transparency is key to building trust in AI systems. Individuals are more likely to trust AI-driven decisions if they can understand how their data is being used and how decisions are being made.
  • Regulatory oversight: The involvement of regulators in disputes is likely to become more common. As AI systems grow more complex, courts may increasingly serve as arbiters in balancing transparency and the protection of trade secrets.

The future of AI and privacy

As AI continues to evolve and play a central role in decision making, ensuring compliance with the GDPR will be critical for businesses. The ECJ Advocate General’s opinion in case C-203/22 provides valuable guidance on how companies can achieve this balance. Organizations have to prioritize creating AI systems that aren’t just powerful and efficient, but also transparent, fair, and respectful of individual rights.

These duties are further amplified by the obligations arising under the EU AI Act, which are based on the same principles of transparency and human oversight.

Author: Giulio Coraggio

 

Intellectual Property

WIPO publishes “Top 100 S&T Clusters Ranking”: China, US and Europe lead, but emerging economies on the rise

Within the broader framework of the Global Innovation Index (GII) – whose 2024 edition will be published on 26 September – the “Top 100 Science and Technology (S&T) Clusters Ranking”, released in advance, is of fundamental importance in identifying the most influential innovation ecosystems globally. The ranking highlights the geographic areas – known as “clusters” – with the highest concentration of authors and inventors working in the scientific and technological sectors.

Two factors are considered in compiling the ranking of the top 100 global clusters: the geographical origin of the inventors listed in patent applications filed under the WIPO Patent Cooperation Treaty, and the origin of the authors of major scientific articles published during the year. Both criteria are well-established indicators of innovation.

There is little surprise in the 2024 results, which confirm the trends observed in the previous year: China (with 26 clusters), the US (20 clusters), and Europe (notably Germany, with 8 clusters) dominate the ranking. What is interesting, however, is that greater technological and scientific development in some countries has not always led to a strong position in the ranking. On the contrary, clusters in high-income economies have grown at a slower pace than those in middle-income economies. The reason lies in the pace at which innovation spreads: slowly in more developed countries, extremely rapidly in developing ones. So it’s not surprising that, among the top 100 clusters, alongside China there are seven other middle-income economies, including Egypt, home to the only African science and technology cluster in the ranking.

Finally, the GII provides a snapshot of the type of innovation observed in various clusters. It shows that African clusters are more focused on publishing scientific articles than on patenting activities, a trend also typical of other developing economies. Conversely, in more industrialized countries, innovation advances both through publications and patents.

Author: Noemi Canova

 

Legal Tech

Overview of the LegalTech market evolution in 2024

The international LegalTech market has seen a dramatic increase in investments over the past year, with billions of dollars directed towards developing advanced legal technologies. For instance, Clearbrief secured USD4 million to expand its AI-based legal writing tool, bringing its total funding to nearly USD8 million. Hebbia raised almost USD100 million in a Series B round for its AI-enhanced document search tool. And Hona, supported by Y Combinator, obtained USD9.5 million to tackle communication challenges in consumer-focused legal practices. DeepJudge received USD10.7 million to enhance corporate legal research, and Norm AI raised USD27 million to expand its AI-based compliance platform. Atticus raised GBP5.6 million to improve IPO prospectus verification, and Harvey, a startup adapting OpenAI’s Large Language Models to the legal field, secured USD100 million in a Series C round, achieving a valuation of USD1.5 billion.

Despite this investment surge, the phenomenon of “AI washing” is emerging, where marketing exaggerates the role of AI in products that often merely repackage third-party services without adding substantial value (eg basic GPT wrappers). This highlights the need for legal firms to consult experts in Legal Innovation and LegalTech to select the most suitable technologies and invest their budgets wisely in solutions likely to deliver the desired return on investment.

In the latest LegalTech applications, generative AI is being applied broadly: from legal research to drafting and reviewing legal documents. These technologies can optimize contract clauses based on historical analysis, improve legal research, and track critical elements in negotiations. Personalization and efficiency in legal services are becoming central as businesses demand tailored, rapid solutions. Generative AI adoption allows for large-scale data analysis, automation of repetitive tasks, and an enhanced client experience.

According to the Italian Legal Tech Report published by Legal Tech Italy and Giuffrè, the LegalTech market in Italy has surpassed EUR30 million, with 89 companies in the sector, up from 85 the previous year. However, the market remains a niche, with many companies still at the project stage and only a small percentage of scale-ups. Key barriers include cultural resistance to change and reluctance to adopt new technologies, often due to fears of disrupting established processes. The current AI hype will likely accelerate LegalTech adoption and attract new capital.

The number and scope of LegalTech conferences held in Italy are expanding, exemplified by the Legal Tech Island event in Palermo in June 2024. This growth underscores the increasing interest in the LegalTech sector, fostering the exchange of ideas and nurturing an innovative community. During the event, I delivered a speech on the productization of legal services. The enthusiasm and engagement of attendees showed that many lawyers and in-house counsel are actively seeking practical ways to integrate AI into their daily practice.

In this sense, we’re advising several in-house teams on developing their AI implementation strategies for the legal function. We help them identify their specific needs, select appropriate AI tools, and create detailed roadmaps for smooth integration. Additionally, we provide training to ensure staff effectively use the new technology and establish monitoring processes to optimize performance. This support will be crucial for law firms as they assist their clients in navigating their innovation journeys and embracing technological advancements.

Author: Tommaso Ricci

 

Life Sciences

Regional Administrative Tribunal suspends the Cannabis Decree

On 10 September 2024, the Regional Administrative Tribunal (TAR) issued an order suspending the effectiveness of the Decree issued by the Ministry of Health on 27 June 2024 (Decree) relating to the marketing of cannabidiol (CBD). The Decree provided for the inclusion of compositions for oral administration of cannabidiol obtained from cannabis extracts in Section B of the Table on medicinal products of the consolidated laws regulating narcotics and psychotropic substances (Presidential Decree 309/1990).

This decision of the Regional Administrative Tribunal comes at the end of a long and complex process that had already produced other decrees on the matter, which were in turn suspended and ultimately repealed.

Preliminary injunction

The judges granted the precautionary request submitted by several companies, acknowledging the risk of economic and financial damage that the immediate application of the Decree could cause. The introduction of the Decree would have jeopardized the entire supply chain, from the production to the sale of CBD-based products, creating legal uncertainties and potential criminal liabilities for industry operators.

By deciding to maintain the status quo until the merits hearing, the Regional Administrative Tribunal adopted a precautionary measure to prevent the destabilization of the CBD market. The merits hearing has been scheduled for 16 December 2024.

In the meantime, businesses can continue their operations without having to comply with the provisions of the Decree, although legal uncertainty remains a concern that could affect their short-term decisions.

CBD and the legislative process

As indicated in the newsletter of 18 July 2024, CBD is a cannabinoid found in the Cannabis sativa L. plant, known for its lack of psychoactive effects, unlike tetrahydrocannabinol (THC). This feature has led to increasing interest in its use, resulting in its inclusion in many products. In 2017, the World Health Organization stated that CBD does not pose a significant risk of addiction or harm to health, encouraging its therapeutic use and trade in several countries. In 2020, in case C-663/18, the Court of Justice of the European Union further ruled that, based on available scientific evidence, CBD has no psychoactive effects and is not harmful to human health.

Despite this, the legislative path for regulating CBD in Italy has been complicated and marked by numerous measures and revisions, culminating in the issuance of the Decree. Published in the Official Gazette on 6 July 2024, the Decree marked the conclusion of a long regulatory process that began in 2020, repealing ministerial decrees of 1 October 2020, 28 October 2020, and 7 August 2023.

The first of these decrees, dated 1 October 2020, updated the tables of narcotic and psychotropic substances by adding CBD to section B of the table on medicinal products, making it subject to medical prescription. However, the entry into force of the decree was suspended on 28 October 2020 to allow the National Institute of Health and the National Health Council to investigate further. Following these investigations, the Ministry of Health lifted the suspension, issuing a new decree on 7 August 2023, which confirmed the inclusion of CBD as a narcotic substance. This decision was immediately challenged by the trade association “Imprenditori Canapa Italia”, which raised concerns about the completeness of scientific opinions and the lack of clarity regarding the effects of CBD, especially with respect to allowable concentrations. Following the appeal filed by the association, the Regional Administrative Tribunal suspended the effectiveness of the decree, citing gaps in the investigation and the absence of a clear assessment of the risks of addiction associated with CBD use.

In response, the Ministry of Health reopened the investigation and requested further scientific opinions, which confirmed the need to include CBD in the table of medicines to protect public health. Based on these opinions, the Ministry of Health issued the Decree. However, the Regional Administrative Tribunal’s recent suspension of the Decree’s effectiveness has reignited the debate, once again making the future regulatory framework for CBD in Italy uncertain. The final decision – which could later move to a higher level of judicial review before the Council of State – will be crucial in establishing a definitive regulatory framework for the use of this substance.

Authors: Nicola Landolfi, Nadia Feola


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Enila Elezi, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

 

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.