January 2, 2024 | 4 minute read

European Medicines Agency publishes five-year AI workplan

On December 18, 2023, the European Medicines Agency (EMA) and the Heads of Medicines Agencies (HMA) published their 2023-2028 Multi-Annual AI Workplan. The Workplan aims to help the European Medicines Regulatory Network (EMRN), a coordinated network of national competent authorities within European Economic Area Member States working alongside the EMA and the European Commission, to embrace artificial intelligence (AI) for internal regulatory use and to develop an approach to regulatory developments that harnesses the opportunities AI presents in the medical field.

Potential benefits include increased productivity, automation of repetitive and time-consuming processes, and the ability to digest and use much larger sets of data in decision-making.

The Workplan

EMA recognizes that health regulatory bodies, like regulated industry, are increasingly using and developing AI-powered tools. The Workplan sets out a “collaborative and coordinated strategy” over the next five years to maximize the benefits of AI in medicines regulation, while carefully targeting and mitigating associated risks. The Workplan was prepared under the direction of the HMA-EMA Big Data Steering Group and was adopted by the EMA’s management board during its December 2023 meeting.

The Workplan concentrates on four primary areas:

Guidance, policy, and product support

EMA is expected to provide ongoing support in the development and evaluation of AI across the medicine lifecycle, building on its AI reflection paper and the comments received during the consultation period that ran from July to December 2023.

From mid-2024, focus will also turn to preparing for the implementation of the EU's AI Act, which is expected to come into force in 2025/2026. Many AI systems used in the context of medicine are likely to fall within the Act's definition of high-risk AI systems and uses, requiring compliance with additional obligations, including certification and transparency requirements. Guidance on how stakeholders can approach this process is therefore likely to be a key point of deliberation as the Workplan progresses.

AI tools and technology

The Workplan indicates that AI tools and technology will be key to internal and external regulatory protocols and developments. Large language models (LLMs), particularly chatbots, are singled out as tools with the potential to assist regulators and are highlighted as a focus area when implementing and monitoring AI for gains in office productivity.

Toward the end of 2024, the Workplan also envisages rolling out knowledge-mining roadmaps across the EMRN and developing several policy initiatives aimed at the collaborative development of AI tools within the network.

Collaboration and training

Over the course of the Workplan's timeframe, a key focus is continued collaboration between EMRN members, the EU, and other international partners and stakeholders in order to keep ahead of the evolving field of AI. This will include creating special-interest working groups on AI and collaborating on training efforts aimed at expanding European understanding of AI and data analytics.

The Workplan additionally includes provisions for the expansion of the Digital Academy through the EU Network Training Centre, offering additional support and development opportunities for delivery of a data science-focused curriculum and subject-specific masterclasses (such as hackathons designed to upskill EMA staff), details of which are to be explored with stakeholders as the Workplan progresses.

Experimentation

The Workplan acknowledges the fundamental role of experimentation in accelerating learning, generating new insights, and reducing uncertainty surrounding AI. It states that experimentation cycles of up to six months at a time will be conducted in the coming years.

During this time, internal EMA guiding principles for responsible AI will also be defined with the support of the European Specialized Expert Community. Once these have been consolidated, a roadmap of research priorities will be developed, and experimentation cycles will be aligned and prioritized as applicable.

Accompanying these cycles will be several technical deep dives investigating specific tools and techniques (eg, digital twins), which are intended to guide experimentation and help ensure that a structured approach is taken across the network.

What comes next?

The EMA understands that AI continues to move quickly and that expertise from academia, industry, and public policy will be required. This is particularly true as we move toward the enactment of the EU AI Act.

As a result, EMA intends to actively engage with, and involve, stakeholders throughout implementation of the Workplan, and we can expect EMA to regularly update the Workplan to keep pace with developments in AI technology, policy, and ethics.

Find out more

For more information on AI and the emerging legal and regulatory standards, visit DLA Piper’s focus page on AI.

Gain insights and perspectives that will help shape your AI Strategy through our newly released AI Chatroom Series.

For further information or if you have any questions, please contact any of the authors of this alert.
