14 November 2024 · 22 minute read

Innovation Law Insights

Artificial Intelligence

US reinforces AI strategy

Ahead of the 5 November 2024 presidential election, on 24 October the US administration adopted a National Security Memorandum titled “Memorandum on Advancing the United States' Leadership in Artificial Intelligence; Harnessing Artificial Intelligence to Fulfill National Security Objectives; and Fostering the Safety, Security, and Trustworthiness of Artificial Intelligence.” The adoption of the memorandum was mandated by the US Executive Order of 30 October 2023 on the safe, secure, and trustworthy development and use of AI.

The memorandum is the most up-to-date document on the US national security strategy on AI. On 24 October the White House also released a complementary document to the memorandum, titled “Framework to Advance AI Governance and Risk Management in National Security.”

Let's briefly explore the key themes of the memorandum and its intended audience, as well as its potential fate in the future.

Frontier AI

First, unlike the 2023 Executive Order, the memorandum primarily focuses on generative AI, a wave that began with the release of OpenAI's ChatGPT in 2022. Generative AI models – such as those powering ChatGPT, Anthropic's Claude, or Google's Gemini – mark an evolution from previous deep learning models because they're adaptable across a wider range of applications. In contrast, the previous generation of AI, primarily based on supervised learning (often called supervised machine learning), was tailored to specific applications and, as a result, was generally more predictable and posed lower risks.

Evoking imagery dear to US culture, the memorandum names these models “frontier models,” defining them as “general-purpose AI systems near the cutting-edge of performance, as measured by widely accepted publicly available benchmarks, or similar assessments of reasoning, science, and overall capabilities.” This definition seems to echo that of the European AI Act, which entered into force on 1 August 2024 and defines general-purpose AI models as AI models characterized by significant generality and capable of competently performing a wide range of distinct tasks.

Who the memorandum addresses, key themes and the role of China

The memorandum establishes US national security policies with respect to frontier AI, assigning specific responsibilities to various federal agencies. It highlights, on the one hand, how the government should support and nurture the leadership of the US AI industry and, on the other, what the government expects from the private sector in order to achieve national security goals.

The document sets out a series of measures to ensure that the US maintains its position of primacy in the global AI ecosystem. Essential in this regard are talent attraction and the government's ability to provide appropriate security and protection guidelines to AI developers and users, helping to mitigate the risks that AI systems can pose. According to the memorandum, security and reliability of AI are crucial aspects of accelerating the adoption of AI systems, and the absence of clear guidelines can be an obstacle. In addition, the memorandum designates the “AI Safety Institute” (AISI) at the Department of Commerce as the primary point of contact for AI companies in the governmental sphere in relation to the evaluation and testing activities of AI systems.

Another crucial aspect touched on in the memorandum is the need for a rapid, large-scale expansion of IT infrastructure and data centres to fuel the growth of the AI industry.

As for governance, the document charges national security agencies with several tasks. Nearly all agencies will have to designate a “Chief AI Officer” (CAIO), and an AI National Security Coordination Group composed of the CAIOs of the major agencies will be created. The memorandum also encourages US cooperation with international partners and institutions, such as the G7, the OECD, and the United Nations, to promote international AI governance.

What about China? It’s never mentioned directly in the memorandum (it’s referred to, generically, as one of the US’s “competitors”), but it’s undoubtedly the main competitor to the US for global AI leadership. Part of the document describes how the US intends to surpass competitors in this race and emphasizes the central role played by partners and allies. On this point, on 28 October 2024 the US administration also published the long-awaited final rule limiting US investment in China. The control regime, which will go into effect on 2 January 2025, will affect US companies and citizens who invest in Chinese companies operating in the fields of AI development, semiconductors and microelectronics, and quantum information technologies.

The memorandum presents an ambitious and detailed vision of AI’s role in US national security. We’ll see how the new administration will address the issue.

Author: Giacomo Lusardi

 

Data and Cybersecurity

NIS2: Personal liability of directors for lack of compliance is a warning message

The NIS2 Directive sends a significant warning to companies in the EU: the personal liability of directors for lack of compliance is now a critical issue that cannot be ignored.

The NIS2 Directive has become applicable to a massive number of companies, which need to notify the competent authorities of their status and adopt measures to ensure compliance. As cyber threats continue to escalate, the NIS2 provision on directors' personal liability places unprecedented responsibility on top management to ensure robust cybersecurity measures are in place. Companies must treat compliance with this directive as a paramount obligation to safeguard their leadership and operations.

Understanding the personal liability of directors under the NIS2 Directive

The personal liability of directors under the NIS2 Directive represents a major shift in how cybersecurity compliance is enforced. Its Italian implementation states:

“The National NIS Competent Authority may impose on natural persons referred to in paragraph 5 of this article, including administrative and management bodies of essential and important entities as per Article 23, as well as those performing managerial functions at the level of CEO or legal representative of an essential or important entity, the application of the accessory administrative sanction of incapacity to perform managerial functions within the same entity. This temporary suspension is applied until the interested party adopts the necessary measures to remedy the deficiencies or comply with the warnings as per Article 37, paragraphs 6 and 7.”

Key points:

  • Direct Accountability: Directors and high-level managers are personally responsible for ensuring compliance with the NIS2 Directive.
  • Administrative Sanctions: Non-compliance can lead to personal sanctions, including temporary incapacity to perform managerial roles within the same entity.
  • Conditional Reinstatement: The suspension remains until the director takes corrective actions to address the compliance failures.

Implications of directors' personal liability for lack of compliance

The NIS2 clause on directors' personal liability has several profound implications:

  • Operational disruption: The incapacitation of key directors can lead to significant operational challenges and strategic setbacks.
  • Reputational damage: Personal sanctions against directors can harm both individual and corporate reputations, affecting stakeholder trust.
  • Legal and financial risks: Companies might face increased legal scrutiny and financial penalties due to directors' non-compliance.

Steps to avoid personal liability under the NIS2 Directive

To mitigate the risk of personal liability for lack of compliance, directors should:

  • prioritize compliance as a paramount obligation: recognize that adhering to the NIS2 Directive is a critical duty requiring immediate attention;
  • implement robust cybersecurity measures: adopt appropriate technical and organizational measures to manage cybersecurity risks effectively;
  • establish clear governance structures: define roles and responsibilities for cybersecurity within the management hierarchy to facilitate accountability;
  • foster a cybersecurity culture: promote awareness and training at all organizational levels to embed cybersecurity into the company's culture;
  • engage regularly with authorities: maintain open communication with national competent authorities for guidance on compliance obligations; and
  • conduct regular audits and assessments: periodically review cybersecurity policies to ensure they meet the evolving standards of the NIS2 Directive.

Why compliance with the NIS2 Directive is a paramount obligation for companies

Given the potential for personal liability of directors under the NIS2 Directive, companies must treat compliance as a paramount obligation:

  • Protecting leadership: Ensuring compliance safeguards directors from personal sanctions, preserving leadership stability.
  • Maintaining operational continuity: Avoiding the incapacitation of key managers prevents operational disruptions.
  • Enhancing corporate reputation: Demonstrating commitment to cybersecurity strengthens stakeholder trust and market positioning.
  • Mitigating legal and financial risks: Compliance reduces the risk of fines, legal actions, and financial losses associated with cyber incidents.

Conclusion

The NIS2 provision on directors' personal liability serves as a critical warning message, elevating cybersecurity from a technical concern to a fundamental aspect of corporate governance. The personal liability of directors for lack of compliance with the NIS2 Directive underscores the importance of proactive measures and diligent adherence to regulatory requirements. Companies must recognize compliance with this directive as a paramount obligation, taking immediate steps to enhance their cybersecurity stance. By doing so, they protect their directors from personal liability and contribute to a more secure and resilient digital environment.

Watch the recording of our video (in Italian) on the obligations deriving from the NIS2 Directive here.

Author: Giulio Coraggio

Data Act and Intellectual Property: Challenges and opportunities for data protection

The Data Act (Regulation (EU) 2023/2854) is a crucial step in regulating non-personal data generated by machines and connected devices. This regulatory framework, part of the EU's Digital Decade initiatives, aims to promote access to and sharing of industrial data through a harmonized system. It clarifies who, alongside the data producer or holder, has the right to access data generated by products or related services, and under what conditions.

Scope and territorial limits

The Data Act targets a wide range of entities, including manufacturers of connected products, related service providers, and users. However, its applicability is limited to recipients and third parties in the EU, which reduces potential international legal conflicts and focuses on a unified European market.

Impacts of the Data Act on trade secrets

A critical aspect concerns the intersection between the Data Act and the protection of trade secrets. Article 4(8) states that data holders can refuse data access under certain conditions. In particular, they can refuse if:

  • “exceptional circumstances” exist; or
  • the data holder, as a trade secret holder, can demonstrate that disclosing the secrets would highly likely cause serious economic harm, despite the technical and organizational measures adopted by the user to protect them.

This exception must be applied on a case-by-case basis and must be notified to the competent authority. However, the mandatory sharing of data, even when the data includes trade secrets, raises concerns about potential economic loss for businesses. This risk arises from the possibility that, while exceptions exist, disclosure could result in unrecoverable costs or reduce the company's competitive advantage.

Practical issues

The Data Act requires data holders to disclose certain information to users or third parties designated by users, even if such information qualifies as trade secrets. However, as highlighted in Recital 31, this regulation should be interpreted to preserve the protection provided for trade secrets under Directive (EU) 2016/943.

The practical issue arises from a divergence in approach between the Trade Secrets Directive and the Data Act. Directive (EU) 2016/943 adopts an ex post approach, requiring a subsequent assessment of the actual impact of disclosing trade secrets. In contrast, the Data Act follows an ex ante approach, requiring potential risks to trade secrets to be evaluated before data sharing. Consequently, trade secret protection alone is insufficient to justify refusing data access; rather, the data holder must demonstrate the likelihood of serious economic harm to exercise the right to refuse under Article 4(8).

Database rights and the Data Act

Another relevant aspect of the Data Act concerns the relationship with database rights, particularly the sui generis right introduced by Directive 96/9/EC. This right is designed to protect databases in which substantial investments have been made to acquire, verify, or present data. But there’s currently legal uncertainty about whether a database containing automatically generated machine data qualifies for this protection.

In many cases, the parties involved tend to assume that the database right applies to this data, so they grant licenses based on the presumed existence of sui generis rights. However, a key question raised by the Data Act is whether the sui generis database right can effectively be invoked to prevent data access and sharing, and whether these rights are still valid when database data includes information from connected devices.

According to Article 43 of the Data Act, the database right doesn’t apply to data obtained from or generated by a connected product or related service within the scope of the Data Act. The aim, as highlighted in Recital 112, is to prevent database rights from becoming a barrier to data access and usage rights. This has a significant consequence: databases containing data from connected products or related services will no longer benefit from sui generis protection.

An unintended side effect could be that organizations are discouraged from mixing connected-device data with other datasets so as not to lose sui generis database rights. Some companies may even consider reverting to manual data collection methods, protecting their datasets with broader intellectual property rights by keeping connected-device data out of them.

Future prospects and recommendations

By 12 September 2025, the European Commission will publish standardized terms for data sharing and trade secret protection. The main Data Act deadlines are as follows:

  • 12 September 2025: Data Act application
  • 12 September 2026: Design obligations apply to new products and services
  • 12 September 2027: Application of unfair contractual terms provisions to all long-term contracts

Addressing the Data Act and managing intellectual property

For organizations, it’s essential to prepare for the Data Act by adopting concrete data management strategies and safeguarding intellectual property assets.

  • Legal assessment of the Data Act’s impact: Organizations should analyse which data and database rights apply and the impact on trade secrets, mapping the data flow for products and related services.
  • Implement a strong data governance framework: Establish clear policies for data access, sharing, and portability, and implement robust security measures to protect data.
  • Training to understand regulatory obligations: It’s important to raise awareness among employees and external partners about compliance regulations and the importance of protecting IP assets.
  • Effective contract and terms of use management: Review data access and usage terms and update sharing agreements to meet transparency and fairness standards.
  • Continuous monitoring and adaptation: Check for relevant sectoral or national laws, monitor regulatory developments, and observe competitor activities to adjust business strategies.

In conclusion, the Data Act requires companies to adopt specific measures to protect their intellectual property rights, such as trade secrets and database rights, in an environment of increased data access and sharing. Businesses will need to review their data management systems, carefully separating data generated by connected devices to avoid losing essential protections. By adequately preparing, organizations will be able to comply with new regulatory requirements, maintain competitiveness, and take advantage of the opportunities offered by a more open European data market.

Author: Maria Vittoria Pessina

 

Intellectual Property

Major publishing house explicitly excludes its books and reprints from being used for AI training

One of the major publishers worldwide has reportedly added an explicit reference to AI to the copyright pages of its newly published books and reprints, stating that no part of these works can be used or reproduced in any manner “for the purposes of training artificial intelligence technologies or systems.” This is the first example of a publishing house taking action against the exploitation of published paper and digital works to train AI technologies, including large language models (LLMs).

Copyright notices are used by publishers to assert their rights and those of their authors over printed and digital books. They’re also used to inform the reader of what can and cannot be legitimately done with the work. In any case, where copyright disclaimers are not used, existing copyright protections still apply.

By adding a reference to AI system training in its copyright notice, the publishing house is effectively excluding its works from being used to develop chatbots and other AI digital tools, which has allegedly been done in the past by using published (and pirated) books without the consent and authorization of rightsholders.

Further, in the newly drafted copyright notice to be added to its books, the publisher expressly reserves its works from the text and data mining exception in accordance with Article 4(3) of Directive (EU) 2019/790 on copyright and related rights in the Digital Single Market (Copyright Directive).

According to Article 2 of the Copyright Directive, text and data mining is “any automated analytical technique aimed at analysing text and data in digital form in order to generate information which includes but is not limited to patterns, trends and correlations.” Under Article 4 of the Copyright Directive, exceptions or limitations to exclusive rights can be provided to allow text and data mining of lawfully accessed copyright-protected works. Paragraph 3 of Article 4 specifies that such exceptions or limitations apply only on the condition that rightsholders have not expressly reserved the use of their works in an appropriate manner, such as by machine-readable means in the case of content made publicly available online.

The newly drafted copyright notice of one of the “Big Five” publishing houses should work in a similar way to exclusion protocols contained in robots.txt files, which are being used by websites to exclude their content from scraping activities by bots and AI technologies. This can be interpreted as a first step in the publishing industry to adopt clearer and explicit statements by publishers on reserving training, text and data mining rights in relation to their published works.
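To illustrate how such exclusion protocols operate in practice, here's a minimal sketch using Python's standard `urllib.robotparser` module. The bot names, paths, and directives below are purely illustrative assumptions, not taken from any actual publisher's or website's configuration:

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt directives: the site reserves its content
# from an AI crawler while leaving it open to other user agents.
robots_txt = """
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# The AI crawler is excluded from the whole site...
print(parser.can_fetch("GPTBot", "https://example.com/books/"))
# ...while other user agents remain allowed.
print(parser.can_fetch("OtherBot", "https://example.com/books/"))
```

The machine-readable reservations contemplated by Article 4(3) of the Copyright Directive follow the same opt-out logic: the rightsholder's reservation is expressed where a compliant machine can read it before accessing the content.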

The decision to rewrite the copyright notice further highlights the ongoing tension between content creators and the AI world. A growing number of publishers, authors, and players in the sector are requesting stronger and more defined protection of their exclusive rights and are taking a defensive approach towards AI technologies and their training and output production in generative uses. For example, it’s been reported that another prominent publisher recently adopted explicit measures prohibiting freelance collaborators working on its authors’ books from copying any of the information and text contained in books into AI systems and programs for the purposes of editing, checking, extraction, or any other related purpose.

In addition to the measures adopted by publishers, authors and their representative organizations are calling for changes in publishing contracts with appropriate safeguards for creators. Agreements with publishers should ensure that authors' consent is obtained before publishers use or allow the use of the works to train AI systems and, more generally, before granting access to the work to an AI system, for example, to produce AI-generated translations, audiobooks, and cover art of copyright-protected books.

In addition, several representatives of the creative industries have been advocating for the introduction of a more comprehensive legal framework. This framework should provide for adequate and transparent licensing provisions to ensure that creators and rightsholders are adequately paid for the use of their works, including in the context of uses made by AI systems. To this extent, new machine-readable text and data mining licenses are being devised to provide legitimate access, in some instances with payment, to machines that automatically scrape copyright-protected content.

Ultimately, rightsholders, both authors and publishers, are increasingly more interested in retaining meaningful control over how and to what extent their works interact with AI systems while being fairly and rightfully compensated for any exploitation of their works.

Author: Chiara D'Onofrio

 

Technology Media and Telecommunication

Italian Communications Authority releases Communications Monitoring Report for the first semester of 2024

The Italian Communications Authority (AGCOM) has published the Communications Monitoring Report No. 3/2024, containing data referring to the first semester of 2024.

Data reported in the Monitoring Report show that the total number of direct fixed-network accesses as of June 2024 does not show substantial changes from the data recorded in March 2024, standing at 20.24 million lines. On a yearly basis, there was an increase of 7,000 accesses, and compared to the corresponding period in 2020, there was an increase of 2.09% (a total of 414,000 more accesses than in that period).

AGCOM also notes that lines based on copper technologies have decreased by about 170,000 on a quarterly basis; over the past four years, the decrease amounts to 4.95 million accesses.

At the same time, lines based on more advanced technologies have increased. As reported by AGCOM, total broadband lines – estimated at about 19.16 million units as of June 2024 – have increased both on a quarterly and yearly basis, registering an increase of 40,000 and 150,000 units, respectively.

Network accesses in Fiber To The Cabinet (FTTC) technology amounted to 9.5 million in June 2024, a year-on-year decrease of 587,000 lines (-5.7% compared to June 2023). Accesses in Fiber To The Home (FTTH) technology, amounting to 5.23 million as of June 2024, increased by more than 300,000 on a quarterly basis and by 1.09 million on an annual basis; compared to June 2020, the increase is 3.71 million lines. Fixed Wireless Access (FWA) lines are also increasing, although to a smaller extent (about 180,000 units year-on-year), and amounted to 2.25 million accesses at the end of June 2024.

This trend is matched by a consistent increase in marketed connection speeds: between June 2020 and June 2024, the share of lines with speeds of 100 Mbit/s or more increased from 46.4% to 75.2% of the total, while marketed lines with a transmission capacity of 1 Gbit/s or more increased from 7% to 25.2% of the total.

The Monitoring Report's data confirms the rising trend of data consumption. Average daily traffic in terms of total volume in the first half of 2024 increased by 14.8% compared to the corresponding value in 2023 and by 65.8% compared to the corresponding value in 2020. These figures are reflected in daily traffic per broadband line, where unit consumption increased by 55% compared to 2020, from 6.03 GB per line to 9.34 GB per line on average per day.
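As a quick sanity check on the reported growth figures (a sketch; the underlying values are those reported by AGCOM, and the rounding convention is an assumption):

```python
# Daily traffic per broadband line grew from 6.03 GB to 9.34 GB per day.
growth_pct = (9.34 / 6.03 - 1) * 100
print(round(growth_pct, 1))  # ~54.9, consistent with the reported ~55% increase

# Share of lines at 100 Mbit/s or more: 46.4% -> 75.2% of the total,
# i.e. a gain of 28.8 percentage points over four years.
print(round(75.2 - 46.4, 1))
```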

With reference to the mobile network segment, AGCOM reports that the total number of active SIMs at the end of June 2024 (both human – i.e. “voice-only,” “voice+data,” and “data-only” SIMs involving human interaction – and M2M, i.e. “machine-to-machine”) is 108.7 million, up by 537,000 units year-on-year. Notably, M2M SIMs increased by 697,000 year-on-year, reaching 30.1 million units. Human SIMs, amounting to 78.6 million as of June 2024, declined by 160,000 units compared to the corresponding period in 2023. According to data reported by AGCOM, 13.6% of human SIMs as of June 2024 were SIMs for business customers and the remaining 86.4% were SIMs for consumer customers.

According to AGCOM reports, the number of human SIMs that generated data traffic during the first half of 2024 can be estimated at approximately 58.7 million, with an increase of about 2.2 million compared to the corresponding period of 2023. This data shows that daily mobile data traffic recorded as of June 2024 increased by 16.5% compared to the corresponding period of 2023 and by more than 165% compared to 2020. The average daily unit consumption in the first half of the year can be estimated at about 0.84 GB, with an increase of 13% compared to the corresponding period of 2023 and more than 150% compared to 2020, when daily data consumption was estimated at 0.33 GB.

Authors: Massimo D’Andrea, Flaminia Perna, Arianna Porretti


Innovation Law Insights is compiled by DLA Piper lawyers, coordinated by Edoardo Bardelli, Carolina Battistella, Carlotta Busani, Giorgia Carneri, Noemi Canova, Gabriele Cattaneo, Maria Rita Cormaci, Camila Crisci, Cristina Criscuoli, Tamara D’Angeli, Chiara D’Onofrio, Federico Maria Di Vizio, Nadia Feola, Laura Gastaldi, Vincenzo Giuffré, Nicola Landolfi, Giacomo Lusardi, Valentina Mazza, Lara Mastrangelo, Maria Chiara Meneghetti, Deborah Paracchini, Maria Vittoria Pessina, Marianna Riedo, Tommaso Ricci, Rebecca Rossi, Roxana Smeria, Massimiliano Tiberio, Federico Toscani, Giulia Zappaterra.

Articles concerning Telecommunications are curated by Massimo D’Andrea, Flaminia Perna, and Matilde Losa.

For further information on the topics covered, please contact the partners Giulio Coraggio, Marco de Morpurgo, Gualtiero Dragotti, Alessandro Ferrari, Roberto Valenti, Elena Varese, Alessandro Boso Caretta, and Ginevra Righini.

Learn about Prisca AI Compliance, the legal tech tool developed by DLA Piper to assess the maturity of AI systems against key regulations and technical standards here.

You can learn more about “Transfer,” the legal tech tool developed by DLA Piper to support companies in evaluating data transfers out of the EEA (TIA), here; check out a DLA Piper publication outlining gambling regulation here; a report analyzing key legal issues arising from the metaverse here; and a comparative guide to regulations on loot boxes here.

If you no longer wish to receive Innovation Law Insights or would like to subscribe, please email Silvia Molignani.
