Innovation Law Insights
20 March 2025
Podcast
Gambling Laws of the World – Italy and The Netherlands
We’ve released the first episode of DLA Piper’s Gambling Laws of the World podcast where Giulio Coraggio, Richard van Schaik and Vincenzo Giuffrè tackle the latest legal issues relating to the online gambling sector in Italy and the Netherlands. Listen to the episode here.
Artificial Intelligence
AI in Legal Processes: Building effective benchmarks to guide the selection and implementation process
Recent developments in legal AI have generated considerable excitement in the industry, but the real challenge remains how to objectively evaluate and effectively implement these tools in legal workflows. While general benchmarking is valuable, effective AI implementation requires a more nuanced understanding of how these tools fit into overall legal processes.
AI as an element of the process
A frequent error is to perceive AI as a direct replacement for human activities. In reality, AI represents merely one aspect of a broader process, with its true value evident in the enhanced outcomes achievable through re-engineered processes. For instance, when examining the creation of case histories in the legal sector, various models of integration can be discerned:
- Traditional process: entirely led by lawyers
- AI-centric: AI generates the case history, which is then reviewed by a lawyer
- Advanced supervision: the lawyer must know the underlying documents to verify the AI output effectively
- Human-centric with AI support: the lawyer remains in control, with AI providing support and verification
Choosing the right model depends on your specific context. A complex case where the lawyer must have a deep understanding of the facts may require a human-centric approach with AI support. Conversely, a simpler issue where the facts are already known could benefit from a more AI-centric approach.
Build custom internal benchmarks
AI solution provider demos and public benchmarks rarely reflect the day-to-day challenges legal teams face when integrating tools into their processes. To effectively evaluate AI, you need to develop internal benchmarks that test performance in your specific workflows; a minimal harness is sketched after the list below. A structured benchmarking process should:
- Identify key legal activities:
  - Define key use cases for your practice.
  - Establish measurable success criteria (accuracy, speed, consistency).
- Build a representative dataset:
  - Collect real legal documents in different formats (Word, PDF, scanned documents).
  - Include standard contracts, regulatory documents, and complex agreements.
  - Annotate the dataset with legal experts to create a reference standard.
- Define specific evaluation metrics, to be adapted on a case-by-case basis, for example:
  - Accuracy in clause extraction.
  - Consistency and reliability of results.
  - Contextual understanding and legal reasoning.
  - Efficient processing of complex documents.
- Test visual data processing:
  - Evaluate OCR recognition for scanned documents.
  - Check the correlation between text and visual elements (maps, annotations).
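As a minimal illustration of what such an internal benchmark harness might look like, the sketch below (in Python, with all names, data structures and the `extract_clauses` call purely hypothetical) compares the clauses an AI tool extracts from each document with an expert-annotated reference standard and reports per-document accuracy.

```python
# Minimal sketch of an internal benchmark harness (hypothetical names and
# structure). It compares the clauses an AI tool extracts from each document
# with an expert-annotated reference standard and reports per-document accuracy.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class BenchmarkCase:
    document_id: str                     # eg a contract from the internal dataset
    source_format: str                   # "docx", "pdf", "scan"
    expected_clauses: dict[str, str]     # clause type -> text annotated by legal experts

@dataclass
class BenchmarkResult:
    document_id: str
    per_clause: dict[str, bool] = field(default_factory=dict)

    @property
    def accuracy(self) -> float:
        return sum(self.per_clause.values()) / len(self.per_clause) if self.per_clause else 0.0

def run_benchmark(
    cases: list[BenchmarkCase],
    extract_clauses: Callable[[str], dict[str, str]],  # the AI tool under evaluation
    matches: Callable[[str, str], bool],               # eg exact match or expert sign-off
) -> list[BenchmarkResult]:
    results = []
    for case in cases:
        extracted = extract_clauses(case.document_id)
        result = BenchmarkResult(case.document_id)
        for clause_type, reference_text in case.expected_clauses.items():
            result.per_clause[clause_type] = matches(extracted.get(clause_type, ""), reference_text)
        results.append(result)
    return results
```

The same harness can then be re-run on every new tool or model version against the same dataset, so that results remain comparable over time.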
The role of accuracy in the process
The accuracy required varies significantly by context. If you're using AI to prepare documents to be presented in court (eg a defence document) or published on a public website (eg terms and conditions), even minor inaccuracies in citations can compromise the credibility of the entire document.
In some contexts, the value of AI increases proportionally to its accuracy only after reaching a minimum threshold. Below this threshold, the time it takes to verify and correct the output may exceed the time it takes to complete the task manually. In other contexts, after a certain level of accuracy, the added value may stabilize.
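As a toy illustration of this threshold effect (all figures below are assumptions for illustration, not measurements), the trade-off can be expressed as the time a lawyer saves net of the time spent reviewing and correcting the AI output:

```python
# Toy model of the threshold effect described above. All figures are
# assumptions for illustration, not measurements.
def net_time_saved(manual_minutes: float, base_review_minutes: float,
                   errors_found: int, minutes_per_fix: float) -> float:
    """Time saved vs drafting manually, net of reviewing and fixing AI output."""
    ai_total = base_review_minutes + errors_found * minutes_per_fix
    return manual_minutes - ai_total

# Low accuracy: correcting the output costs more than drafting manually.
print(net_time_saved(manual_minutes=60, base_review_minutes=15,
                     errors_found=10, minutes_per_fix=8))   # -35 minutes
# Above the threshold: the saving becomes real.
print(net_time_saved(manual_minutes=60, base_review_minutes=15,
                     errors_found=2, minutes_per_fix=8))    # 29 minutes
```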
Practical example: Benchmark for rental contract analysis
Let's consider a practical case: an in-house legal department that operates in several European jurisdictions (Italy, France, Germany) and regularly analyses commercial leases. An effective benchmark should test how an AI tool:
- extracts key clauses:
  - withdrawal clauses
  - rent review mechanisms
  - maintenance responsibilities
  - dispute resolution
- handles jurisdictional differences:
  - specific features of Italian law (eg the cedolare secca flat-rate tax regime, pre-emption rights)
  - significant differences under French and German law
  - application of EU law
- integrates visual data:
  - associating contractual obligations with cadastral plans
  - interpreting technical real-estate documentation
A well-structured benchmark should encompass a variety of documents in different formats, including scanned, redacted and annotated documents. The results should then be compared against a reference standard created by legal experts to accurately assess performance against expected results. In our view, an optimal benchmark is built through several iterations, adapting it to the organization's specific needs; an illustrative reference-standard entry is sketched below.
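By way of illustration only, one entry of the expert-annotated reference standard for such a lease benchmark could be structured as follows (the clause labels, jurisdiction-specific fields and file names are hypothetical):

```python
# Hypothetical reference-standard entry for the lease benchmark described above.
# All labels, clause texts and field names are illustrative assumptions.
lease_reference_entry = {
    "document_id": "lease_IT_0042",
    "jurisdiction": "IT",
    "source_format": "scanned_pdf",        # also exercises OCR, not just extraction
    "expected_clauses": {
        "withdrawal": "Either party may withdraw with six months' written notice...",
        "rent_review": "Rent is adjusted annually to 75% of the ISTAT index...",
        "maintenance": "Ordinary maintenance is borne by the tenant...",
        "dispute_resolution": "Disputes are referred to the Court of Milan...",
    },
    "jurisdiction_specific": {
        "cedolare_secca": False,            # Italian flat-rate tax regime election
        "pre_emption_right": True,
    },
    "visual_elements": ["cadastral_plan_p12"],  # plan the tool should link to obligations
}
```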
Beyond accuracy: Additional elements to evaluate
An effective benchmarking process must consider numerous factors beyond simple accuracy (a toy scoring sketch follows the list below):
- Speed: How crucial is it to get results quickly?
- Qualitative relevance: What level of quality is actually necessary for the specific context of use?
- Economic efficiency: What cost-benefit ratio is sustainable for the organization?
- Transparency: To what extent do conclusions need to be traceable and justifiable?
- Methodological consistency: How important is it for a task to be performed consistently every time?
- Knowledge extraction: Can AI identify and catalogue information that a professional would not have time to systematically record?
- Human supervision: At what point in the process can efficient human oversight be provided without eroding the professional's time savings?
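As a toy illustration (the dimensions, weights and ratings below are assumptions that each team would calibrate for itself), these factors can be folded into a single weighted score so that tools are compared on more than raw accuracy:

```python
# Toy weighted scoring of the evaluation dimensions listed above.
# Weights and ratings are illustrative assumptions only.
weights = {
    "accuracy": 0.35,
    "speed": 0.15,
    "qualitative_relevance": 0.20,
    "cost_efficiency": 0.10,
    "transparency": 0.10,
    "consistency": 0.10,
}

def weighted_score(ratings: dict[str, float]) -> float:
    """Combine 0-1 ratings per dimension into one comparable score."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(weights[dim] * ratings.get(dim, 0.0) for dim in weights)

print(weighted_score({"accuracy": 0.90, "speed": 0.70, "qualitative_relevance": 0.80,
                      "cost_efficiency": 0.60, "transparency": 0.50, "consistency": 0.85}))
```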
Conclusions
AI alone doesn’t provide value – it only generates it in the context of a well-designed process. To truly understand the value of AI in the legal world, we must:
- understand what we want to get out of the process;
- find the best way to integrate AI; and
- compare the AI-enhanced process to the traditional method.
By building rigorous internal benchmarks that test AI performance in your specific workflows and jurisdictional contexts, you can adopt AI tools with greater confidence, ensuring they improve efficiency without compromising legal accuracy or risk management.
In our team we’ve developed a specific benchmarking methodology for legal teams dealing with intellectual property and technology matters. Contact us to find out more.
Author: Tommaso Ricci
AI Act v GDPR conflict in fixing algorithmic bias?
A potential AI Act v GDPR conflict might arise from the tension between the AI Act’s goal of addressing algorithmic bias and the GDPR’s strict privacy protections, making it difficult for companies to process sensitive personal data for that purpose while remaining compliant.
The AI Act explicitly allows companies to process sensitive personal data – such as race or health data – to detect and correct algorithmic bias in high-risk AI systems like hiring tools, credit scoring, or facial recognition. But, while the AI Act aims for flexibility, existing GDPR regulations introduce significant complexity, creating a challenging legal puzzle for businesses.
AI Act v GDPR potential conflict: Where’s the legal clash?
Legal basis for data processing
Under GDPR Article 9, processing sensitive personal data is generally prohibited unless specific legal grounds are met:
- Explicit consent from the individual.
- Public interest as defined by EU or national law.
However, the AI Act’s Article 10(5) allows more flexibility by permitting the processing of sensitive data without explicit consent to prevent or correct algorithmic bias, provided strong safeguards are in place. This flexibility directly conflicts with GDPR’s more stringent conditions, intensifying the AI GDPR conflict.
Necessity and proportionality
- GDPR perspective: Data processing is permissible only when absolutely necessary, and companies have to rule out less invasive methods first.
- AI Act perspective: Requires “strict necessity,” yet in practice, this may enable broader data use when addressing algorithmic bias. The ambiguity here risks clashing with GDPR’s stricter privacy standards, further deepening the AI GDPR conflict.
An alternative view: Is there really an AI GDPR conflict?
Some experts argue that the perceived tension between the AI Act and GDPR may be exaggerated. They contend that GDPR Articles 6 and 9 are not in contradiction but rather work together: Article 6 defines the legal bases for processing all personal data, while Article 9(2) provides specific exemptions for processing sensitive data. This means that companies handling special categories of data for AI bias correction must satisfy both provisions – ensuring a legal basis under Article 6 and an exemption under Article 9(2) – along with the AI Act’s Article 10(5) requirements.
From this perspective, the challenge isn’t a fundamental AI Act v GDPR conflict but rather the complexity of compliance. The AI Act explicitly states in Recital 70 that bias mitigation may fall under GDPR’s “substantial public interest” legal ground. Instead of conflicting with GDPR, the AI Act provides a structured approach to handling sensitive data in high-risk AI systems, albeit with a high compliance burden on companies.
The AI GDPR conflict: A real legal puzzle for companies
Businesses striving for fair and unbiased AI systems now find themselves caught between two critical regulatory frameworks:
- Ensuring AI fairness by correcting algorithmic bias, potentially requiring broad processing of sensitive data.
- Adhering strictly to GDPR, which could restrict their ability to handle sensitive data without explicit individual consent.
Given the risks associated with inconsistent regulatory enforcement across the EU, companies urgently need clear guidelines or legal updates. Without clarity, differing interpretations may lead to uneven practices – and potentially costly compliance issues, fuelling further AI GDPR conflict.
The path forward: Resolving the AI GDPR conflict
The AI Act’s intent is commendable – addressing algorithmic bias is essential. But its approach must be harmonized with GDPR to avoid legal uncertainty. Until policymakers provide explicit guidance on navigating these complexities, companies must carefully balance AI fairness objectives with strict data privacy compliance.
Ultimately, clear and consistent interpretation and application of these laws across the EU will be essential to unlocking AI's full potential without compromising fundamental rights, and to minimizing the AI GDPR conflict. This scenario is only one of the legal areas where companies are likely to face challenges in dealing with the provisions of the AI Act. Hopefully, authorities will provide clarifications.
Author: Giulio Coraggio
Data Protection and Cybersecurity
Italian law on adapting to the DORA Regulation: Analysis and implications
On 12 March 2025, Italian Legislative Decree no. 23 of 10 March 2025 was published in the Official Gazette, setting out the provisions for adapting Italian law to Regulation (EU) 2022/2554 (better known as DORA).
The main objective is to ensure that the Italian legal system is fully aligned with the provisions of DORA and to implement the measures delegated to member states.
We’ve analysed the main points of the decree and assessed the impact it may have on the DORA adaptation process of each financial entity.
DORA competent authorities and oversight powers in Italy
Firstly, the Decree clearly identifies the competent authorities responsible for ensuring the implementation of DORA at the national level: the Bank of Italy, Consob, IVASS, and COVIP.
In line with the provisions of the regulation, the text provides that the competent authorities will have appropriate regulatory and supervisory powers to perform their control and monitoring functions over financial entities.
However, there appears to be a partial misalignment with DORA’s provisions regarding oversight powers. According to Article 8 of the decree, the competent authorities will have inspection and audit powers not only over financial entities but also over ICT service providers supporting critical or important functions of financial entities.
This provision seems to diverge from Article 42 of DORA, which allows competent authorities to exercise oversight powers over ICT service providers designated as critical. The concept of critical ICT service providers doesn’t necessarily overlap with that of ICT service providers supporting critical or important functions. Critical providers are designated by the European Supervisory Authorities based on specific criteria outlined in the regulation, following a designation process that also includes the possibility of a dialogue with the provider. In contrast, providers supporting critical or important functions are identified by financial entities based on their internal policies. This identification depends on the type of service provided to each financial entity and is exclusively the responsibility of the entity receiving the service.
A provider might therefore be identified as supporting critical or important functions by one financial entity but not by another entity to which it provides services.
So it’s not entirely clear how the provision in Article 8 of the decree fits into this regulatory framework. It seems that a financial entity’s identification of a provider as supporting critical or important functions would trigger the competent authority’s ability to exercise broad audit and inspection rights directly over that provider (which, unlike the financial entity, wouldn’t otherwise be subject to oversight by the authority).
A partial mitigation of this misalignment can be found in Article 30(3) of DORA, which still requires that financial entities include audit rights in their contracts with providers supporting critical or important functions, for the benefit of the oversight authorities.
Notification of major ICT incidents
Regarding ICT incidents, the decree confirms the competent authorities as the correct recipients for notifications of major ICT incidents, and it also confirms the notification timelines required by the relevant technical standards.
Furthermore, the decree specifies that if an entity is supervised by more than one competent authority, the authority that first receives the incident report from the financial entity will be responsible for sharing it with the other authorities.
Sanctions
Lastly, the decree introduces a specific sanctioning regime for non-compliance with the DORA Regulation.
This is a particularly significant point, as the regulation itself didn’t prescribe specific sanctions, instead leaving their definition entirely to member states.
The sanctioning framework provides penalties that vary according to the severity of the violation and the type of entity involved, with a particular focus on violations related to risk management frameworks and measures safeguarding operational continuity.
In line with the European sanctioning strategy, the decree proposes sanctions proportional to the financial entity’s turnover, with upper limits reaching up to 10% of turnover. Additionally, there’s the possibility of applying accessory measures, such as temporary suspension from administrative functions, in the case of particularly serious violations. Moreover, some sanctions, such as those related to failure to report major ICT incidents or non-cooperation during inspections, may affect not only the entities but also the individuals responsible, such as board members.
Overall, this is a rather significant sanctioning framework, which – considering that DORA has already entered into force – requires financial entities to be prepared to demonstrate they’re complying with the regulation and to justify the decisions made.
Author: Edoardo Bardelli
Intellectual Property
EU: Working towards a regulation on new genomic techniques
On 14 March 2025, the Council approved the negotiating mandate for the regulation of new genomic techniques (NGT), marking a significant step towards innovation and sustainability in the agri-food sector.
This agreement, reached after months of technical and political discussions, pays particular attention to managing patents, which, again, are a crucial tool for ensuring transparency and competitiveness in the market.
New genomic techniques represent a technological advancement that gives plants special characteristics, such as increased resistance to diseases and to unfavourable weather conditions.
This development marks a departure from February 2024, when the European Parliament opposed the legislative proposal of the Commission and the Council on new genomic techniques amending Regulation (EU) 2017/625, introducing numerous amendments to exclude the patentability of NGT plants, their parts and the genetic information they contain (the so-called “patent ban”).
The proposed regulation distinguishes between two types of NGT plants: category 1 NGT Plants, which are considered as naturally occurring plants and are exempt from the application of the regulations on genetically modified organisms; and category 2 NGT Plants, which are instead subject to the same regulations as genetically modified organisms, including those on labelling, risk assessment and pre-market authorization.
One of the key elements of the negotiating mandate concerns patent management. The burden will be on all companies applying for the registration of a plant or product in the first category to disclose all existing patents and all patent applications already filed but not yet granted. In the interests of transparency, the Commission will also set up a publicly accessible database listing all category 1 NGT plants.
According to the Council proposal, an expert committee composed of representatives of the member states will be called upon to oversee the impact of patent rights on the agri-food chain.
To the same end, one year after the regulation enters into force, the Commission will also have to publish a study examining the effect of patents on the competitiveness of the sector and how patent rights affect farmers' access to NGT plants.
It will now be interesting to see the final text that emerges from the negotiations with the European Parliament.
Author: Noemi Canova
Life Sciences
The Court of Justice of the European Union rules on the concept of "medicinal product advertising"
On 27 February 2025, the Court of Justice of the European Union (CJEU) issued a ruling in Case C-517/23, clarifying which activities fall under the definition of "medicinal product advertising" as regulated by Directive 2001/83/EC (Medicines Directive).
According to the CJEU, any actions aimed at promoting the prescription, supply, sale, or consumption of medicinal products qualify as advertising, even if they don’t target specific products. However, the Medicines Directive doesn’t apply to advertising campaigns solely intended to influence the choice of pharmacy where medicines are purchased.
The DocMorris case
The case originated in Germany and involved DocMorris, a Dutch pharmacy that had launched various promotional campaigns targeting German customers since 2012. These promotions offered financial incentives for purchasing prescription-only medicines, including:
- discounts and direct payments for prescription medicines;
- cash rewards ranging from EUR2.50 to EUR20; and
- vouchers redeemable for non-prescription medicines and personal care products.
The North Rhine Chamber of Pharmacists challenged these practices, obtaining interim injunctions from the Cologne Court, which initially prohibited these advertising campaigns. During the legal proceedings that followed, the German Federal Court of Justice referred the case to the CJEU, asking for clarification on the following points:
- Does advertising aimed at promoting the purchase of prescription medicines across a pharmacy's entire product range fall within the scope of the Medicines Directive?
- Is a national provision compatible with the Medicines Directive if it:
  - prohibits a mail-order pharmacy based in another member state from promoting prescription medicines through cash rewards or percentage discounts on future purchases of other products;
  - allows a mail-order pharmacy based in another member state to promote prescription medicines through direct discounts and cash payments?
The CJEU’s findings
- The Medicines Directive doesn’t apply to advertising campaigns promoting the purchase of prescription medicines through discounts or cash rewards. These practices influence only the choice of pharmacy and not the consumption of the medicine itself, as prescribing remains the sole responsibility of physicians.
- However, promotional activities involving vouchers redeemable for non-prescription medicines do fall under the definition of "medicinal product advertising."
Additionally, the CJEU stated that member states can prohibit promotions where the reward amount isn’t disclosed in advance to prevent consumers from overestimating its value.
The CJEU also reaffirmed that medicinal product advertising must promote rational use, avoid exaggerating product benefits, and prevent excessive consumption, particularly for non-prescription medicines. Consequently, national regulations banning promotions through rewards or discounts for non-prescription medicines are legitimate, as they aim to protect public health.
Conclusion
The CJEU’s ruling provides essential guidance for regulating medicinal product advertising across EU member states. By emphasizing the need to balance commercial freedom with public health protection, the judgment offers crucial direction for managing promotional practices in the EU pharmaceutical sector.
Authors: Nicola Landolfi, Nadia Feola
Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.
Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, Matilde Losa and Arianna Porretti.
For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, Ginevra Righini.
Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.
You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.
If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.