
10 July 2022 | 13 minute read

Coming EU legislation will change the AI regulatory environment for healthcare technology and life sciences companies

The proposed EU AI Regulation, a comprehensive regime for regulating AI in the EU, is set to affect organizations around the world. The regulation is expected to be enacted in 2024 or 2025. Once that happens, organizations that make AI available within the EU, use AI within the EU, or whose AI outputs affect people in the EU will become subject to it, wherever they are based. Such organizations must therefore consider the regulation’s extraterritorial reach and how it will affect the way they do business.

In the highly competitive market of pharmaceuticals and life sciences, the ability to lawfully deploy complex technology for competitive advantage is core to the strategy of almost all organizations.

Many players in the fields of drug development and discovery have sought to level up their processes by incorporating AI, advanced disease and molecular modelling, and other rapidly evolving computational technologies into their development and production cycle. By doing so, manufacturers and suppliers can speed up traditional innovation and generate novel insights for new therapies. The result can be the mitigation of some of the typical pitfalls encountered in the life sciences sector – among them, slow times to market, low success rates, path-dependent thinking, and the overall costs of development.

For these organizations, it is not simply out with the old and in with the new. Many are now seeking to use complex technologies, such as AI, to review their stock of available compounds for treating diseases outside those compounds’ original purview, in the same way that some previously existing drugs are now being used to mitigate the symptoms and severity of COVID-related illnesses.

Alongside the augmentation of more traditional uses in the life sciences sector, we are also seeing a rapid expansion of new uses for complex technologies throughout the industry, blurring traditional lines between the life sciences and healthcare sectors. It is estimated that the value of health-tech companies, such as those focusing on providing carebots and wearable health devices, has risen from roughly $8 billion in 2016 to around $44 billion in 2022.

These technologies offer the life sciences sector an opportunity to address many of the strains the healthcare industry is facing today, such as long lag times to see a doctor and delayed diagnoses of complex and rare diseases.

Regardless of the potential benefits of such technologies, however, developers, manufacturers, suppliers, and even end users will need to attend carefully to the EU’s coming AI Regulation to avoid substantial fines and penalties.

MAN, MACHINE, AND EVERYTHING IN BETWEEN: DEFINING AI

Many commentators attempting to pin down exactly what AI is and how it should be regulated find themselves wandering in a confusing, shadowy thicket. Gathering several complex technologies into one broad working definition means only that the definition will be outpaced, and soon, by newer technologies.

The current version of the EU’s proposed AI Regulation offers a comprehensive working definition that organizations should strive to understand.

The updated version of the text, released in November 2021, defines AI as:

“[a system that] receives machine and/or human-based data and inputs, infers how to achieve a given set of human-defined objectives using learning, reasoning or modelling implemented with the techniques and approaches listed in Annex I, and generates outputs in the form of content (generative AI systems), predictions, recommendations or decisions, which influence the environments it interacts with.”

The techniques and approaches referred to in Annex I include:

  1. Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

  2. Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; and

  3. Statistical approaches, Bayesian estimation, search and optimization methods.

While such a wide scope helps immunize the regulation from future developments, it creates a broad class of technologies that are to be considered “AI” and that are therefore subject to the regulation.

Indeed, the definition means that many types of software could be caught: not only autonomous AI but also systems whose decision path is clearly predetermined or understood and which are therefore not truly autonomous. Add to this the far-reaching and extra-territorial scope of the regulation and the potential fines for breaching its requirements, and things could get costly.

THE ROBO-DOCTOR WILL SEE YOU NOW

Health tech offers a prime example of how the regulation’s broad classification reaches many technologies not always considered AI. Broadly speaking, health tech is technology developed for the purposes of improving aspects of the healthcare system or a person’s overall health – including technology which supports infrastructure (such as teledoctors or electronic health records); personal technology (such as health trackers); and medical devices (such as many of the novel blood-sugar trackers for diabetic patients).

Many of the above examples could be interpreted as falling within the regulation’s definition of AI.

In the most basic example, a smart health tracker (such as a fitness band worn on the wrist) collects information from the user and generates inferences about the data, allowing the user to view summaries of these behaviors, whether it be steps taken or hours slept. By doing this, the AI has taken in data, processed it by using a number of the techniques indicated in the regulation, and produced an output that the user has requested.
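
To see how little is needed to trigger the definition, consider a minimal, hypothetical sketch in Python. Every name, threshold, and data value below is invented for illustration and is not drawn from any real product.

  from statistics import mean

  # Hypothetical daily step counts collected by a wrist-worn tracker
  daily_steps = [4200, 6100, 3800, 7500, 5200, 4900, 6800]

  def activity_recommendation(steps):
      # A simple average is a "statistical approach" under Annex I;
      # the returned message is an output (a recommendation) that may
      # influence the user's behaviour
      if mean(steps) < 5000:
          return "Try to be more active: aim for 7,500 steps a day."
      return "You are meeting a moderate activity target. Keep it up."

  print(activity_recommendation(daily_steps))

Nothing here involves machine learning, yet the sketch arguably satisfies each element of the proposed definition: human-based data in, a statistical inference, and a recommendation out.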

Despite a lack of complexity when compared to a device like a chemical modelling machine, for example, even relatively traditional tools may still fall within the definition of AI for the purposes of the regulation. Notably, even a traditional electronic health record, which may feed inputs into search functions to infer standard-of-care next steps, could trigger the proposed definition.

You may find yourself asking, “Why does the regulation need to cover my fitness tracker?” From the EU’s perspective, even a simple device can harvest fruitful information and put it to far-ranging predictive use. Data gathered to infer steps and speed may also include your personal location data – allowing someone to track such factors as where you are likely to be at a given time, the places you visit frequently, and the locations where you are likely to interact with other users. Data collected through innocuous apps like step counters or sleep logs may be used by companies to create health profiles of their users – which may then be sold on to other organizations to calculate your insurance premiums or target you with advertisements.

The ability of even simpler AI tools to become part of something larger has prompted EU regulators to cast a wide net. And that over-inclusiveness will, necessarily, entangle innovations of all stripes. In addition, because the scope of technology often creeps as needs, ideas, and managers change, the regulatory risk of a particular AI innovation depends not just on what the tool is, but on what it may become.

Much like the General Data Protection Regulation, such a precautionary regulatory approach reminds us that while AI offers a number of benefits and solutions to problems within the life sciences sector, it must be appropriately implemented and governed to stay on the right side of the EU’s expected regulation. And while regulators may ultimately select very discrete targets for enforcement, the scope of the regulation would empower broad enforcement; it would be up to the regulators to determine how far to go.

WHO SHOULD CARE?

The simple answer: every company in the life sciences supply chain. The regulation would require organizations throughout the supply chain to adhere to the obligations it sets out. The consequences of failure can be substantial, with fines reaching up to €30 million or 6 percent of global turnover (whichever is higher) – potentially in addition to fines for overlapping breaches, such as up to 4 percent of global turnover under the General Data Protection Regulation.
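
As a rough, back-of-the-envelope illustration of how that exposure scales (the turnover figure below is invented, and real penalties would depend on the facts of the breach):

  def max_ai_act_fine(global_turnover_eur):
      # Up to EUR 30 million or 6 percent of global turnover, whichever is higher
      return max(30_000_000, 0.06 * global_turnover_eur)

  def max_gdpr_fine(global_turnover_eur):
      # GDPR: up to EUR 20 million or 4 percent of global turnover, whichever is higher
      return max(20_000_000, 0.04 * global_turnover_eur)

  turnover = 2_000_000_000  # hypothetical EUR 2 billion global turnover
  print(f"AI Act exposure: EUR {max_ai_act_fine(turnover):,.0f}")  # EUR 120,000,000
  print(f"GDPR exposure: EUR {max_gdpr_fine(turnover):,.0f}")      # EUR 80,000,000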

WHAT DOES THE FUTURE HOLD FOR EUROPE AND AI IN HEALTH TECHNOLOGY AND LIFE SCIENCES? 

The EU and a number of other prominent jurisdictions clearly intend to begin regulating AI at the organization and consumer levels. However, tech companies moving into health, and health and pharmaceutical companies looking to deploy technology, all need to know that today’s investment will not be regulated away tomorrow.

Here are answers to some key questions about the implications of the AI regulation.

DOES IT MATTER IF I AM NOT IN THE EU?

No. The aim of the regulation is to protect the rights of people in the EU. It will apply to high-risk AI that is made available within the EU, used within the EU, or whose output affects people in the EU; whether the provider or user is based in the EU is therefore irrelevant. For example, where AI creating data models from patient data is hosted on a server outside the EU, and/or the decisions which the AI makes or enhances are taken outside the EU, the regime may still apply.
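
Put schematically, the jurisdictional test is a simple disjunction: if any one limb applies, so may the regime. A minimal sketch (the function and parameter names are invented for illustration):

  def ai_act_may_apply(available_in_eu, used_in_eu, output_affects_eu_persons):
      # Extraterritorial scope: any single limb is enough, regardless of
      # where the provider or user is established
      return available_in_eu or used_in_eu or output_affects_eu_persons

  # Hypothetical: model hosted outside the EU, decisions taken outside the EU,
  # but the outputs concern patients in the EU
  print(ai_act_may_apply(False, False, True))  # True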

WHAT DO I NEED TO UNDERSTAND?

Providers: The most onerous controls will apply to providers – a person or organization that develops an AI system, or that has an AI system developed, with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

Many of the controls which will apply to providers will be familiar to those who have been tracking AI developments and proposals more generally: transparency; accuracy, robustness and security; accountability; testing; data governance and management practices; and human oversight. EU-specific aspects include the requirement for providers to self-certify high-risk AI by carrying out conformity assessments and affixing the appropriate proof of conformity markings (such as a CE marking); that high-risk AI must be registered on a new EU database before first use/provision; and that providers have in place an incident reporting system and take corrective action in the event of serious breaches or non-compliance.

Importers, distributors and users: Other participants in the high-risk AI value chain will also be subject to new controls. For example, importers will need to ensure that the provider has carried out the conformity assessment and has drawn up the required technical documentation. Users will also now be required to use the AI in accordance with its instructions, monitor it for problems (and flag any to the provider/distributor), and keep logs, among other requirements. Note, however, that personal use is excluded from the user obligations.

WHAT ABOUT ORGANIZATIONAL COMPLIANCE?

A key response to this growing regulatory trend will be to ensure that processes are in place now to cater for the requirements of the regulation and to create a buffer against future regulation of currently gray areas.

Companies should audit their existing systems and their plans for new products and services to check whether anything may be caught under the coming regime. Particular caution should be applied to technologies that stray towards those prohibited under the regulation, such as AI systems that deploy subliminal techniques or exploit the vulnerabilities of specific categories of persons. Similar prohibitions apply to social scoring systems and real-time biometric identification systems.

ASSESSING THE HIGH-RISK DESIGNATION: 5 KEY STEPS

To assess what is caught by the high-risk designation within the regulation, organizations should consider applying a simple methodology (a sketch of how such an assessment might be recorded follows the list):

  1. Understanding – perform an initial audit of current and near-term systems that might be caught by current or future regulation, appraise your organization’s position in the life cycle (buyer, user, providing AI as a service), and obtain clarity about your data holdings;

  2. Policies – look at relevant process flows, the need for notices and transparency, policies encompassing regulatory requirements, ethical and SESG-driven policies, and those that are industry-specific;

  3. Workforce and consumer aspects – consider risks of bias and discrimination;

  4. Ownership preservation – protect your data and AI systems; and

  5. Corporate strategy alignment – ensure that AI policies fit with your organization’s general control set of Responsible AI and broader Responsible Governance, such as responsible data retention protocols, ESG policies, and similar internal approaches.
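
One way to make the five steps operational is a simple assessment record per AI system. The class and field names below are invented for illustration and are not a prescribed compliance artifact:

  from dataclasses import dataclass

  @dataclass
  class AISystemAssessment:
      # Hypothetical record of the five-step assessment for one AI system
      name: str
      lifecycle_role: str                    # e.g. "buyer", "user", "provider"
      data_holdings_mapped: bool = False     # 1. Understanding
      policies_reviewed: bool = False        # 2. Policies
      bias_risks_assessed: bool = False      # 3. Workforce and consumer aspects
      ownership_protected: bool = False      # 4. Ownership preservation
      strategy_aligned: bool = False         # 5. Corporate strategy alignment

      def open_items(self):
          # List the steps not yet completed
          return [step for step, done in vars(self).items()
                  if isinstance(done, bool) and not done]

  audit = AISystemAssessment("fitness-band analytics", "provider",
                             data_holdings_mapped=True)
  print(audit.open_items())  # the four steps still outstanding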

JURISDICTIONS AROUND THE WORLD ARE AIMING TO REGULATE AI

This same concept of self- or independent auditing of AI is rapidly evolving in the US, UK, and other jurisdictions. In the US, the Federal Trade Commission has announced its intent to hold companies using discriminatory algorithms accountable under its existing authority. New proposals are gaining traction in Congress that would formalize such requirements and grant the FTC express jurisdiction to oversee and enforce penalties for violations.

These issues have particular resonance in healthcare and life sciences as recognized “high risk” areas where decisions may drive access to, and effectiveness and safety of, medical diagnoses and therapies.

At the same time, health regulators are focusing on the difference between assistive and autonomous technologies, a trend with implications for both regulatory and common law views on responsibility and the division of labor between human medical professionals, researchers, and machines.

All of this will be evolving rapidly over the next few years. Developments in the EU may offer key insights for what the future holds in the US and other jurisdictions around the world, particularly given the push by some individual states in the US to match European standards.

FIND OUT MORE

For more information on AI and the emerging legal and regulatory standards, contact the authors or your usual DLA Piper contact, or find out more at DLA Piper’s AI focus page.

You can find a more detailed guide on the AI Regulation and what’s in store for AI in Europe in DLA Piper’s AI Regulation Handbook.

To assess your organization’s maturity on its AI journey (and check where you stand against sector peers), you can use DLA Piper’s AI Scorebox.

Find more on AI, technology and the law at Technology’s Legal Edge, our tech sector blog. And visit Coretex, our blog covering life sciences, pharmaceuticals, and their intersection with law and technology.