
25 October 2024 · 27 minute read

Innovation Law Insights

Podcast

The state of the global online gambling market with Marco Trucco of Videoslots

In this episode of the “Diritto al Digitale” podcast, host Giulio Coraggio sits down with Marco Trucco, Chief Marketing Officer at Videoslots, to explore the evolving landscape of the online gambling industry. Listen here.

 

Artificial Intelligence

Has your organization implemented an AI governance model?

It's become increasingly clear that the intersection of AI and governance is pivotal for organizations looking to use the power of AI while mitigating associated risks.

The rapid evolution of AI, coupled with stringent regulatory frameworks such as the EU AI Act, necessitates a structured and comprehensive approach to AI governance.

1. AI strategy and core principles

Effective AI governance begins with a clearly defined strategy set by senior leadership. This top-down approach ensures that AI use aligns with the company’s broader vision, focusing on core principles such as ethical usage, trust, and compliance with regulatory standards. Legal and risk management teams are then tasked with developing policies, controls, and frameworks to operationalize this strategy.

2. AI internal stakeholders and committees

To execute AI governance at the tactical level, organizations have to establish dedicated AI governance committees. These committees, often comprising legal, IT, compliance, data, and cybersecurity experts, should be responsible for overseeing AI-related risks. Reporting to senior management, this body plays a crucial role in policy approvals, vendor management, and integrating AI into existing risk structures. At the moment, this solution is preferable to appointing a single AI Officer, who might not have all the competencies to address AI compliance.

3. Identifying use cases under EU rules

A fundamental aspect of AI governance is identifying which AI use cases fall under regulatory scrutiny. Given the broad legal definitions in the EU AI Act, even seemingly benign systems might be classified as AI. Organizations have to carefully assess their AI systems, especially those that cross borders, as they might still fall under the purview of EU regulations.

4. Risk identification and categorization

Once AI use cases are mapped, organizations have to categorize them based on risk levels – whether prohibited, high risk, or general-purpose AI. A proactive approach is essential, as risks could range from reputational damage to legal exposure, particularly in contexts like HR and credit checks, which the EU AI Act may deem high risk.
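As an illustration, this mapping exercise can be thought of as assigning each inventoried use case to a risk tier. The sketch below is purely illustrative; the tier assignments are simplified examples, not a legal classification under the EU AI Act:

```python
# Illustrative sketch of mapping inventoried AI use cases to risk tiers.
# The assignments below are simplified examples only, not legal conclusions.
RISK_TIERS = {
    "social_scoring": "prohibited",          # an Art. 5 prohibited practice
    "cv_screening_hr": "high_risk",          # employment context (Annex III)
    "credit_scoring": "high_risk",           # access to essential services (Annex III)
    "general_purpose_chatbot": "general_purpose",
    "spam_filter": "minimal",
}

def categorize(use_case: str) -> str:
    # Unknown use cases default to "needs_assessment" rather than "minimal",
    # reflecting the proactive approach described above.
    return RISK_TIERS.get(use_case, "needs_assessment")

print(categorize("credit_scoring"))     # prints high_risk
print(categorize("new_internal_tool"))  # prints needs_assessment
```

The point of the default tier is that any use case not yet assessed is flagged for review rather than silently treated as low risk.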

5. Implementing controls

For each identified use case, controls should be put in place to mitigate risks. These could include human oversight, bias assessments, and robust technical measures to secure systems. High-risk AI systems must comply with statutory requirements, and organizations should also focus on vendor protections through contracts to ensure compliance across the board.

Finally, given the ever-changing nature of AI and the law, governance processes must be continuously updated. Committees should stay informed of legal and technological developments, ensuring that previously approved systems remain compliant as they evolve. The organizations that invest in solid AI governance stand to gain the most from AI’s capabilities, enjoying a measurable return on investment.

For organizations looking to integrate AI into their operations, a proactive approach to governance is no longer optional – it’s essential. By understanding and implementing a strong governance framework, companies can mitigate risks and position themselves to fully benefit from the opportunities AI presents.

For more on this topic, read the October issue of our AI law journal and the presentation of our AI compliance tool.

Author: Giulio Coraggio

 

Data Protection and Cybersecurity

AI and privacy: The DPAs' view on children and AI and trustworthy AI

From 9 to 11 October 2024, the fourth edition of the G7 Data Protection Authorities (DPA) Roundtable took place in Rome. Among the key issues discussed was AI and its impact on privacy, particularly in relation to building trustworthy AI systems and protecting children in the context of AI technologies.

The Roundtable was hosted by the Italian DPA. The event brought together privacy regulators from Canada, France, Germany, Japan, the UK, the US, and representatives from the European Data Protection Board (EDPB) and the European Data Protection Supervisor (EDPS). Below we outline the main points.

Trustworthy AI

One of the central themes of the event was trustworthy AI. The DPAs published a statement on trustworthy AI, in which they acknowledged that AI technologies are being deployed across all sectors of society, presenting multiple opportunities. However, these same technologies also pose significant challenges, particularly in terms of privacy, data protection, and other fundamental rights.

In their statement on trustworthy AI, the DPAs expressed concern about the potential harms posed by AI, especially in cases where personal data is processed. They noted that many AI systems, including those using generative AI models, depend on vast amounts of data, which can lead to risks such as stereotyping, bias, and discrimination, even if they're not directly using personal data. These issues can, in turn, affect larger societal trends, particularly in the form of deep fakes or disinformation.

The DPAs also underscored that it's critical to embed data protection principles into AI systems from the outset, applying the principle of privacy by design. This means that AI technologies should be built with data protection considerations in mind, ensuring that privacy is safeguarded at every stage of development and use.

AI and children

Another major focus of the Roundtable was the protection of children in the digital age, particularly in relation to AI-driven tools. The DPAs published a statement on AI and children in which they recognized that while AI offers significant opportunities for children and young people, these technologies can also expose them to heightened risks due to their developmental stage and limited understanding of digital privacy.

Several key issues related to AI and children were identified:

  • AI-based decision-making: the complexity and lack of transparency in AI systems can make it difficult for children and their caregivers to understand how decisions are made, especially when these decisions have significant implications. Without adequate transparency, there is a risk of unintentional discrimination or bias, especially when children are involved in AI-based decision-making processes.
  • Manipulation and deception: AI tools can be used to subtly influence users, pushing them to make decisions that may not be in their best interest. This can be particularly dangerous for children, who may struggle to recognize manipulative content. AI-powered technologies, such as virtual companions or toys, could lead children to form emotional connections with machines, potentially causing them to share sensitive information or make decisions that expose them to risks. Specific examples include:
    • AI in toys and AI companions: children may develop emotional bonds with AI-enhanced toys or online companions, making them more vulnerable and leading them to disclose sensitive personal information or to be otherwise manipulated.
    • Deep fakes: young people are particularly at risk of being targeted by deep-fake content, which can include inappropriate or even harmful imagery of themselves.
  • Training AI models: AI models often require large datasets to function effectively. The use of children’s personal data to train these models, including when data is scraped from publicly available sources, raises concerns about privacy violations and long-term harm.

In light of these risks, the DPAs issued a series of recommendations to mitigate potential harms to children and ensure that AI systems respect their rights:

  • AI technologies should be guided by the “privacy by design” principle.
  • AI systems should include mechanisms to prevent online addiction, manipulation, and discrimination, especially when these systems are likely to affect children.
  • Children must be protected from harmful commercial exploitation through AI.
  • AI models affecting children should prioritize their best interests, both in terms of data collection and processing, and in the system’s outputs.
  • Data Protection Impact Assessments (DPIA) should be conducted to evaluate risks associated with AI systems involving children.
  • AI models should respect the transparency principle and provide explainable results, allowing young users and their caregivers to make informed decisions about how their data is used.

Conclusions

It's no surprise that DPAs around the world are increasingly focusing on AI, given the profound implications it has for privacy. The G7 DPA Roundtable reinforced the critical link between AI and privacy, emphasizing that as AI technologies rapidly advance and become deeply woven into the fabric of everyday life, the need to prioritize privacy safeguards becomes more urgent. Companies developing AI tools should embed privacy and data protection principles into the heart of AI development to ensure the ethical, transparent, and fair use of these technologies across society.

Author: Roxana Smeria

The one-stop-shop mechanism in the NIS2 Directive: Guidance for companies on identifying the main establishment

Under the NIS2 Directive, entities that fall within its scope have to register on the Italian National Cybersecurity Agency (ACN) portal. They have to provide the information specified in the Italian Legislative Decree No. 138/2024. And some digital service providers might have to indicate the so-called “main establishment.”

In this article, we'll consider what a main establishment is with regard to digital service providers that also operate outside the national territory. These providers may benefit from the so-called “one-stop-shop” mechanism, which aims to streamline jurisdictional issues for companies.

What is the “one-stop-shop” under the NIS2 Directive?

One of the most relevant features of the NIS2 Directive is the “one-stop-shop” mechanism. This mechanism provides that companies that may benefit from it will be subject to the exclusive jurisdiction of the EU member state in which they provide their services or, for the specific categories discussed below, in which they have their “main establishment.”

Who can benefit from the “one-stop-shop”?

According to Article 5 of the Decree, the mechanism applies to specific categories of digital service providers characterized by the cross-border nature of their services, including:

  • providers of public electronic communications networks or publicly available electronic communications services, which are deemed to be under the jurisdiction of the member state in which they provide their services;
  • public administration bodies, which are subject to the jurisdiction of the member state that established them;
  • providers of DNS (domain name system) services, top-level domain name registries, entities providing domain name registration services, cloud computing service providers, data centre service providers, content delivery network providers, managed service providers, managed security service providers, as well as providers of online marketplaces, online search engines or social networking service platforms, which are instead subject to the jurisdiction of the member state where they have their main establishment in the EU.

So if your company falls into the latter category, it's crucial to correctly identify your “main establishment” in the EU.

How to identify the main establishment?

The NIS2 Directive provides for different ways of determining the main establishment. Specifically, the main establishment in the EU is considered to be:

  • that of the member state in which decisions on IT security risk management measures are predominantly taken;
  • if it's not possible to determine the member state in which such decisions are taken or if they're not taken in the EU, the main establishment is deemed to be the one located in the member state in which the IT security operations are carried out;
  • if this is also not possible, that of the member state in which the person concerned has the establishment with the largest number of employees in the EU.

If the entities referred to above aren't established in the EU territory but offer services within it, they have to designate a representative in the EU. The representative must be established in one of the member states where those services are offered and will be subject to that state's jurisdiction.
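For illustration only, the cascading test above can be sketched as a simple decision procedure. All field and function names below are hypothetical; this is a sketch of the criteria as described, not an official test:

```python
# Illustrative sketch of the NIS2 "main establishment" cascade described above.
# All names are hypothetical; this is not legal advice or an official test.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Entity:
    risk_decisions_state: Optional[str]   # where cyber risk decisions are predominantly taken
    security_ops_state: Optional[str]     # where IT security operations are carried out
    employees_by_state: Dict[str, int]    # EU member state -> employee headcount
    eu_representative_state: Optional[str] = None  # for non-EU entities offering EU services

def main_establishment(e: Entity) -> Optional[str]:
    # 1. Member state where decisions on cybersecurity risk management are taken
    if e.risk_decisions_state:
        return e.risk_decisions_state
    # 2. Otherwise, where the IT security operations are carried out
    if e.security_ops_state:
        return e.security_ops_state
    # 3. Otherwise, the establishment with the largest number of employees in the EU
    if e.employees_by_state:
        return max(e.employees_by_state, key=e.employees_by_state.get)
    # 4. Not established in the EU: jurisdiction follows the designated representative
    return e.eu_representative_state

print(main_establishment(Entity(None, None, {"IT": 120, "DE": 300})))  # prints DE
```

The order of the checks mirrors the cascade in the Directive: each criterion applies only when the preceding one cannot be determined.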

Challenges in identifying the main establishment

Many companies may find it difficult to define their main establishment with certainty according to the criteria just described, especially when cybersecurity decisions are decentralised and spread across several locations in the EU.

Pending specific clarifications by the competent national authorities, since the one-stop-shop mechanism in the context of digital services is also present in other EU regulatory instruments (such as the GDPR and the DSA), it's reasonable to consider the developments of the concept of principal establishment in the context of data protection. In this regard, it may be useful to refer to the guidelines of the European Data Protection Board (EDPB) on the one-stop-shop mechanism in the context of the GDPR.

According to the EDPB, the main establishment should be the place where decisions regarding the purposes and means of data processing are made, with the power to have them implemented. Transferring this concept to the NIS2 context, the main establishment would be the member state where strategic decisions on cyber risk management are made, with the power to impose their implementation.

However, if a determination can't be made using the above criteria, it will always be possible to apply the more easily determinable criterion of the number of employees. In this scenario, the main establishment is the one in the member state with the largest number of employees in the EU.

Conclusions

The NIS2 Directive is still being transposed in several EU countries, creating a regulatory landscape that can generate uncertainty for businesses operating at the European or cross-border level. The one-stop-shop mechanism is an opportunity for businesses to simplify their compliance obligations, but they have to analyse it carefully and prepare in order to take advantage of it.

Author: Gabriele Cattaneo

 

Intellectual Property

Opting out of training AI with copyrighted material is not unlimited

The Hamburg District Court has placed limits on copyright holders' ability to opt out of the use of their content for AI training.

On 27 September 2024, the District Court of Hamburg issued a significant ruling regarding copyright and the use of AI in a case involving professional photographer Robert Kneschke and the non-profit organization LAION (Large-scale Artificial Intelligence Open Network). Kneschke accused LAION of copyright infringement, asserting that the organization reproduced one of his photographs without authorization to create a dataset for training generative AI systems.

The case concerning the use of copyright-protected content for AI training

LAION developed an open-access dataset for training AI systems, which collects nearly six billion hyperlinks to publicly accessible images, accompanied by their respective textual descriptions. To create the dataset, LAION downloaded images from online archives and used software to verify that the descriptions in the source dataset corresponded to the visual content. Images that didn't match the descriptions were filtered out, while those that did were included in the dataset along with relevant metadata, such as URLs and descriptions. To conduct this analysis and verify the text-image correspondence, LAION had to temporarily store the images.
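The filtering step described above can be sketched roughly as follows. The similarity function stands in for the image-text matching model actually used, and the threshold and field names are assumptions:

```python
# Rough sketch of a text-image filtering step of the kind described above.
# The `similarity` callable stands in for the actual matching model; the
# threshold and field names are assumptions for illustration only.

def build_dataset(records, similarity, threshold=0.3):
    """Keep only (url, caption) pairs whose caption matches the image content.

    `records` yields dicts with "url", "caption", and "image" (temporarily
    downloaded bytes). Only the URL and metadata are retained, never the
    image itself, mirroring the link-only dataset described above.
    """
    dataset = []
    for rec in records:
        score = similarity(rec["image"], rec["caption"])
        if score >= threshold:  # caption plausibly describes the image
            dataset.append({"url": rec["url"], "caption": rec["caption"],
                            "similarity": score})
        # the downloaded image bytes are discarded either way
    return dataset

# Toy stand-in: treat similarity as whether the caption appears in the image bytes
toy = [{"url": "https://example.com/a.jpg", "caption": "dog", "image": b"a dog photo"},
       {"url": "https://example.com/b.jpg", "caption": "cat", "image": b"a dog photo"}]
sim = lambda img, cap: 1.0 if cap.encode() in img else 0.0
print([r["url"] for r in build_dataset(toy, sim)])  # prints ['https://example.com/a.jpg']
```

The legally relevant point, reflected in the comments, is that the image copy exists only transiently for the comparison, while the published dataset holds links and metadata.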

The dispute arose when Kneschke claimed that LAION violated his copyright by using one of his photographs without authorization during the dataset creation process. The image in question was analysed by LAION and subsequently included in its dataset. The photograph was downloaded from the website of a photography agency with which Kneschke collaborated, and it bore the agency's watermark. Furthermore, the agency's terms of service explicitly state that users may not “use automated programs, applets, bots or the like to access the website or any content thereon for any purpose, including, by way of example only, downloading content, indexing, scraping or caching any content on the website.”

Consequently, Kneschke filed a complaint against LAION, alleging copyright infringement for the unauthorized reproduction of his photograph during the dataset creation. He argued that this reproduction did not fall within the exceptions outlined in Sections 44a, 44b, and 60d of the German Copyright Act (UrhG). In its defence, LAION contended that its actions fell within the scope of the text and data mining (TDM) exception for scientific research purposes, as provided by Article 60d of the UrhG. LAION further asserted that the use of the contested image was only temporary, as it was deleted immediately after analysis and not stored permanently. Additionally, LAION clarified that the created dataset did not contain graphic reproductions of the photographs but merely links to the images available online, so they claimed not to have violated Kneschke's copyright.

The Hamburg court's decision on the use of AI for training purposes

The District Court of Hamburg dismissed Kneschke's complaint and accepted LAION's defences, establishing that the reproduction of images for content analysis and its corresponding textual description should be distinguished from use for training AI systems. According to the German court, LAION's creation of a free dataset falls under the TDM exception for scientific research as outlined in Article 3 of the Copyright Directive and Article 60d of the UrhG.

The text and data mining exception for scientific research purposes

The court held that the reproduction of an image for the creation of a dataset intended for training AI systems qualifies under the text and data mining (TDM) exception for scientific research purposes. It noted that “scientific research generally refers to methodical and systematic pursuit of new knowledge. […] the concept of scientific research does not presuppose any subsequent research success. […] the creation of a data set of the type at issue, which can form the basis for the training of AI systems, can certainly be regarded as scientific research.”

To affirm the absence of commercial purpose, the German court also pointed out that the dataset was made freely and publicly available. The fact that the dataset could be used by for-profit companies for training or further developing their AI systems is irrelevant to the classification of LAION's activities, as research conducted by for-profit entities is still considered research activity, contributing to the advancement of knowledge. The court further clarified that any existing relationships between LAION and commercial companies in the AI sector don't imply that such companies exert significant influence over LAION's activities. Moreover, the court noted that it was not demonstrated that LAION provided privileged access to its research findings to these companies, circumstances that could have hindered the invocation of the exception under Article 60d of the UrhG.

The “opt-out” mechanism to use data for AI training

The court based its decision primarily on Article 60d of the UrhG, considering that LAION's activities fall within the TDM exception for scientific purposes. Consequently, it limited its discussion of the opt-out issue to an obiter dictum. The court stated that expressing an opt-out in plain language is sufficient for rights holders to communicate their reservations. Furthermore, it established that the opt-out need not be formulated in a machine-readable format, like robots.txt files, as current technologies, including AI-based systems, should be capable of interpreting human language. So it's sufficient for the reservation to be expressed in a “machine understandable” format. But the court clarified that this is not a general rule and that each case must be evaluated based on the prevailing technological advancements at the time.

This approach introduces new challenges for businesses in the AI sector: if opt-outs articulated in natural language are considered “machine readable,” data aggregators will need to deploy AI systems with natural language processing capabilities to identify and interpret such reservations. The court seems to suggest that the burden of error in searching for opt-outs in natural language should be borne by AI enterprises, given the absence of a standard TDM protocol for reservations on the web.
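By way of illustration, a crawler operator might check for both forms of reservation: machine-readable rules in robots.txt, and a scan of a site's terms of service. A real system would need genuine natural language processing, as the court's reasoning implies; the keyword scan below is purely illustrative:

```python
# Illustrative check for TDM opt-outs in both machine-readable and
# natural-language form. The keyword list is an assumption; a real system
# would need NLP, as the court's reasoning implies.
import urllib.robotparser

def machine_readable_opt_out(robots_txt: str, user_agent: str, url: str) -> bool:
    """True if robots.txt disallows fetching the URL for this user agent."""
    rp = urllib.robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    return not rp.can_fetch(user_agent, url)

def natural_language_opt_out(terms_text: str) -> bool:
    """Naive keyword scan for a plain-language reservation, like the one in
    the agency's terms of service quoted above. Purely illustrative."""
    signals = ("scraping", "data mining", "automated programs", "bots")
    text = terms_text.lower()
    return any(s in text for s in signals)

robots = "User-agent: *\nDisallow: /images/"
print(machine_readable_opt_out(robots, "MyCrawler", "https://example.com/images/a.jpg"))  # prints True
print(natural_language_opt_out("Users may not use automated programs or bots."))          # prints True
```

Under the court's obiter dictum, only the first check is unambiguous; detecting the second kind of reservation reliably is exactly the burden the ruling appears to place on AI enterprises.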

Temporary use of images for training purposes

Regarding LAION's defence that the images were used only “temporarily,” the Hamburg Court rejected this argument, finding that the reproduction performed by the defendant could not be deemed “transient” or “incidental.” The image files were downloaded and analysed intentionally and consciously, indicating that the download process was not merely an ancillary step in the analysis but rather a deliberate and controlled acquisition by LAION. Consequently, the court ruled that LAION could not invoke the exception outlined in Article 44b of the UrhG in this case.

What would have been the outcome of the dispute in Italy?

If the case had been decided in Italy, the outcome would probably not have been different from the one reached by the District Court of Hamburg. Article 70-ter of the Italian copyright law, implementing Article 3 of the Copyright Directive, allows the extraction of text and data for scientific research purposes. The provision states that research organisations include universities, institutes and other entities with research purposes. This notion doesn't require that scientific research be the entity's only statutory objective; it's sufficient that it be the main one, which is therefore compatible with carrying out entrepreneurial activities alongside scientific research.

Article 70-ter stipulates that an entity cannot be considered a “research organisation” if it's subject to decisive influence from commercial enterprises granting them preferential access to the results of the research. Such influence is compatible with the status of research organisation for a subsidiary, provided that preferential access to the results of the research is excluded. So, under Italian law, even commercial companies can qualify as research organisations, if they meet the requirements of Article 70-ter.

In light of the above, an Italian court would have probably considered LAION's activity to be compliant with copyright law, since it's oriented towards scientific research and doesn't pursue commercial purposes.

Conclusion

The Hamburg Court's decision is well-reasoned and reflects the complexities of balancing intellectual property rights with the advancement of AI technology. The court thoroughly evaluated the applicability of various copyright exceptions, ultimately siding with the interests of scientific research. This ruling underscores the challenges that traditional copyright law faces in the age of AI, where mass data collection and analysis are essential for technological development.

But some aspects of the ruling leave questions unanswered, particularly regarding the adequacy of the opt-out mechanism for online content and how the exercise of reservations should be treated within the context of data mining for AI.

Author: Carolina Battistella

New EU “Design Package” on the protection of industrial designs: Evolution and challenges for the design industry

10 October 2024 marked a historic moment for the European design industry. The Council of the European Union approved two key legislative acts under the framework of the “Design Package,” a reform that takes the legal protection of industrial designs into new territory.

This long-awaited regulatory evolution, proposed by the European Commission in 2022, responds to a continuously transforming sector, accelerated by emerging technologies such as AI, 3D printing, and the metaverse. The primary objective? To modernize the regulatory framework and adapt it to the challenges of the present and future.

Design isn't merely an aesthetic expression but a strategic economic asset for Europe. Design-intensive industries account for 16% of the EU’s GDP and 14% of total employment, underscoring the critical importance of protecting and fostering creativity and innovation. An effective system for protecting designs not only safeguards businesses but also contributes to the global competitiveness of the EU, especially in an increasingly digitalized environment.

The legislative package comprises two key legal instruments:

  • a directive on the legal protection of designs (a recast of Directive 98/71/EC); and
  • a regulation amending Council Regulation (EC) No 6/2002 on Community designs and repealing Commission Regulation (EC) No 2246/2002.

These new provisions go beyond merely revising the existing legislation: they represent a fundamental transformation aimed at making the system more accessible, flexible, and aligned with new technologies. The main objectives include:

  • simplifying the registration process, making it less costly and more harmonized at the European level;
  • adapting design protection to digital innovations, such as virtual designs in the metaverse and 3D printing, providing users with appropriate tools to defend their rights;
  • ensuring consistency with the trademark system, allowing registered design holders to prevent the importation of products infringing their rights.

The main innovations introduced:

  • Expanded definition of “design” and “product”: The concept of design now encompasses not only the external appearance of a product but also movement, transition, and other forms of animation. Additionally, the term “product” extends to both physical and virtual goods, such as graphical interfaces or stores in the metaverse, paving the way for broader and more versatile protection.
  • Protection against 3D printing: One of the most complex challenges today is unauthorized reproduction through 3D printing. The new legislation clarifies that the infringement of design rights encompasses not only the physical production of a product but also the downloading, copying, sharing, and distribution of digital files that enable reproduction.
  • The “repair clause”: In a bid to promote competition and the right to repair, components of complex products – such as cars – will no longer be protected if used solely for repair and restoration of the product's original appearance. This measure encourages greater freedom for consumers and enhances the aftermarket for spare parts.
  • Removal of the visibility requirement: The new legislative package abolishes the need for design visibility during product use to qualify for protection. The sole requirement is that the design must be clearly represented at the time of registration, using innovative techniques such as 3D images or videos. Additionally, multiple products can be protected under a single application, further streamlining the process.
  • Protection against counterfeit goods in transit within the EU: Holders of registered designs in the EU will have the authority to prevent third parties from bringing in products from non-EU countries, even if those products are not intended for sale in the European market.
  • Administrative invalidity procedures: Member states will be able to establish an administrative process for declaring the nullity of a registered design at national offices, akin to the existing procedures for national trademarks.
  • Protection of cultural heritage: Member states will have the authority to refuse the registration of designs that reproduce elements of national cultural heritage, such as traditional garments or regional artistic motifs, safeguarding cultural identity.

Following the Council's approval, the package is now waiting for the formal signature of the President of the European Parliament and the President of the Council. It will then be published in the Official Journal of the European Union. The directive will come into force 20 days after publication, with member states given 36 months to implement the necessary measures into national law. The regulation, on the other hand, will also take effect 20 days post-publication and will be directly applicable in member states after four months.

The modernization of European legislation on industrial designs and models marks a crucial step in ensuring a clear and harmonized legal framework that supports innovation and business competitiveness. Streamlining procedures and providing more accessible protection for companies are vital elements for keeping Europe at the forefront of the international design landscape.

Author: Rebecca Rossi

 

Food and Beverage

The Court of Justice of the European Union makes landmark decision on “meat sounding”

On 4 October 2024, the Court of Justice of the EU (the CJEU) decided case C-438/23 on “meat sounding.” The case concerned the lawfulness of using denominations usually associated with animal products (eg burger, sausage, steak) to designate, market or promote food made from plant proteins or, in any case, other than animal proteins, and the lawfulness of national regulations prohibiting the use of such denominations.

The CJEU had been asked two preliminary questions by the French Conseil d'État, which had been called upon to rule on the legitimacy of a legal provision prohibiting the use of names recalling meat or meat preparations or cuts with reference to vegetarian and/or vegan foods.

The references for a preliminary ruling submitted by the Conseil d'Etat were as follows:

  • Must the provisions of Article 7 of Regulation (EU) No 1169/2011 (the Regulation), which require that consumers be provided with information that does not mislead them as to the identity, nature and quality of the food, be interpreted as expressly harmonising (within the meaning of and for the purposes of the application of Article 38(1) of the Regulation) the matter of the use of names of products of animal origin from the butchery, charcuterie and fish sectors to describe, market or promote foods containing vegetable proteins which may mislead the consumer, thereby preventing a member state from intervening in the matter by adopting national measures regulating or prohibiting the use of such designations?
  • Must the provisions of Article 17 of the Regulation – which provide that the name by which the food is identified is, in the absence of a legal name, its customary name or a descriptive name – in conjunction with the provisions of Annex VI, Part A, paragraph 4 – be interpreted as expressly harmonising, within the meaning and for the purposes of the application of Article 38(1) of the Regulation, the matter of the content and use of names other than legal names, designating foods of animal origin to describe, market or promote foods containing vegetable proteins, even in the case of total substitution of ingredients of vegetable origin for all the ingredients of animal origin constituting a foodstuff, thereby preventing a member state from intervening in this matter by adopting measures regulating or prohibiting the use of such names?

The CJEU concluded, after reviewing the scope and area of application of the Regulation, that the provisions of the Regulation harmonise the protection of consumers against the risk of being misled by the use of names, other than the legal names, consisting of terms from the butchery, charcuterie and fish sectors to describe, market or promote foods containing vegetable proteins and not of animal origin. Consequently, the provisions of the Regulation preclude an EU member state from adopting national measures regulating or prohibiting the use of such names for the products referred to and at issue in the proceedings.

In addition, the CJEU also stated that the harmonisation resulting from the provisions of the Regulation precludes a member state from adopting a national provision setting a minimum level of vegetable proteins below which the use of names consisting of terms from the butchery or fish sectors would be authorised to designate, market or promote foods containing vegetable proteins.

In the light of the above, it can be stated that, within the EU, it's not possible to prohibit the use of names referring to meat, its processing or particular cuts, unless these names are the subject of already existing legal names protected under EU law.

The CJEU, through its ruling, has clarified that any national regulations introduced to prohibit the use of terms traditionally used for products of animal origin to distinguish or promote foods that don't contain animal protein are unlawful, because they don't comply with EU law.

Author: Federico Maria Di Vizio


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani and Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer”, the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA) here, and check out a DLA Piper publication outlining Gambling regulation here, as well as a report analyzing key legal issues arising from the metaverse here, and a comparative guide to regulations on lootboxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
