17 July 2023 | 11 minute read

What to know about foundation models: From emerging regulations to practical advice

The latest draft of the European Union’s (EU) comprehensive regulatory regime for artificial intelligence (AI), recently approved across its member states, contains specific provisions that address the proliferation of foundation models – a form of machine learning model trained on vast quantities of data and adaptable to multiple tasks – throughout the EU.

The provisions are part of the proposed EU AI Regulation (the AI Act), which aims to address the potential harms that various forms and uses of AI may pose. In this alert, we take a quick look at these provisions and then examine foundation models themselves.

The distinct safeguards around foundation models in the AI Act, alongside the primary obligations (such as compliance with EU safety standardization requirements) to which both those who provide and those who deploy these models are subject, highlight the critical role foundation models are expected to play. These new protective measures, such as disclosure requirements when a party is interacting with a foundation model, are intended to increase transparency while empowering Deployers[1] to understand how their rights and safety may be affected.

As part of this Deployer empowerment, it is not sufficient for Providers[2] to ensure that they themselves are compliant under the AI Act. Providers of foundation models must be sufficiently transparent to ensure that parties interacting with their AI System[3] can do so in compliance with the obligations set out in the text, and that appropriate governance measures are implemented across all phases of the AI System’s lifecycle. One example is the requirement that Providers of foundation models that function as generative AI (GenAI) disclose a sufficiently detailed summary of the copyright-protected training data that has been used.

The EU’s approach to regulating foundation models is of particular interest and has far-reaching implications. The fact that the AI Act addresses these AI Systems through an additional layer of compliance, rather than through the original “Low-Risk/High-Risk/Unacceptable Risk” methodology, indicates that regulators envisage additional harms that may not always be caught in their original assessment.

The ripples of these actions do not stop at the shores of the member states. As it did with the General Data Protection Regulation, the EU has given the AI Act extraterritorial effect, requiring organizations to comply with its requirements if their activities impact parties within the EU, even if the organizations themselves are not located in the EU. The EU’s approach may therefore prompt organizations and regulators elsewhere to take a harmonized approach to regulating foundation models and AI Systems, so that trade and business can continue with minimal compatibility issues.

Introduction to foundation models

Why is the regulation and (effective) governance of foundation models required, and why are regulators across the world so concerned with these models? To answer these questions, it is necessary to understand what exactly it is the regulators are seeking to regulate.

Foundation models have become increasingly common in the field of natural language processing (NLP). However, the concept is not limited to NLP tasks and may also be applied to audio and visual processing, such as image segmentation and generation.

Regardless of application, these models are made possible by three key concepts: transfer learning, scale, and self-supervised learning.

Transfer learning

Transfer learning is the process of training a model on one task (typically a broad, general-purpose task) and then fine-tuning it on the task of actual interest using a smaller set of more precise examples. This enables foundation models to leverage knowledge gained from one task to excel in another. By building on pre-existing knowledge, these models can achieve better performance and require less training data for specific tasks.
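
To make this concrete, the minimal sketch below shows the transfer-learning pattern in PyTorch: a stand-in pretrained model is frozen and only a small task-specific layer is trained on a handful of examples. All names, shapes, and data here are hypothetical placeholders, not any particular vendor’s implementation.

```python
# Minimal sketch of transfer learning: freeze a pretrained backbone and
# fine-tune a small task-specific head on a few labelled examples.
import torch
import torch.nn as nn

pretrained_backbone = nn.Sequential(          # placeholder for a large pretrained model
    nn.Linear(768, 768), nn.ReLU(), nn.Linear(768, 768)
)
for param in pretrained_backbone.parameters():
    param.requires_grad = False               # preserve the pretrained knowledge

task_head = nn.Linear(768, 2)                 # small head for the downstream task (two classes)
optimizer = torch.optim.AdamW(task_head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 768)                # stand-in for embedded task inputs
labels = torch.randint(0, 2, (8,))            # stand-in task labels

for _ in range(10):                           # brief fine-tuning loop
    logits = task_head(pretrained_backbone(features))
    loss = loss_fn(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```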

Scale

While transfer learning is crucial, scale is what gives foundation models their power. Advancements in computing power, training data availability, and research have made it possible to train foundation models at an unprecedented scale. These models are trained on much larger datasets with greater computing power and have millions or even billions of parameters. Their immense size and unique architecture allow them to capture the complexity and nuances of their training data, leading to improved performance and generalization.

Self-supervised learning

What truly sets foundation models apart is their use of self-supervised learning. In contrast to supervised learning, where models rely on human-labelled data, foundation models learn to make predictions from input data without explicit labels. This is more scalable than supervised learning because it does not require a large amount of human effort to label data. In self-supervised learning, the pretraining task is derived automatically from unannotated data, which can make the resulting model richer and potentially more useful than one trained on a more limited, human-labelled dataset.
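
The toy sketch below illustrates the core idea: the training targets are derived directly from the raw text itself (here, simply predicting the next word), so no human annotation is required. The whitespace tokenization is a deliberate simplification of the learned subword tokenizers that real foundation models use.

```python
# Minimal sketch of a self-supervised objective: the labels come from the data itself.
raw_text = "foundation models learn patterns from unannotated text"
tokens = raw_text.split()                     # toy whitespace tokenization

# Build (context, target) training pairs with no human labelling:
# the model is asked to predict each next token from the tokens before it.
examples = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in examples:
    print(f"input: {' '.join(context)!r:<55} predict: {target!r}")
```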

As an example, consider foundation models for NLP, such as the GPT and BERT families of models. The ability of such models to generate coherent and contextually relevant text is what makes foundation models so powerful for downstream tasks. These models can be fine-tuned to perform a wide variety of functions, such as writing a legal memo, with only a few training examples. A properly fine-tuned foundation model can speed up an attorney’s work by proficiently drafting a memo for the attorney to review. The capabilities of foundation models for NLP and other tasks are expected to increase with more data, computing power, and advances in research.
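
As a hypothetical illustration, those few training examples for a memo-drafting task might be assembled as prompt/completion pairs. The JSONL layout below is a common convention for fine-tuning data, but the exact schema, file name, and tooling depend on the model provider or framework used; none of them are prescribed by the AI Act or any particular vendor.

```python
# Hypothetical sketch: assembling a small fine-tuning dataset for a memo-drafting task.
import json

examples = [
    {
        "prompt": "Draft a short memo summarizing the client's data-retention obligations.",
        "completion": "MEMORANDUM\nRe: Data-retention obligations\n[attorney-reviewed model output]",
    },
    {
        "prompt": "Draft a short memo on notification duties following a security incident.",
        "completion": "MEMORANDUM\nRe: Incident notification duties\n[attorney-reviewed model output]",
    },
]

# Write one JSON object per line; the file would then be supplied to the chosen
# fine-tuning tooling. Model outputs should always be reviewed by an attorney.
with open("memo_finetune.jsonl", "w") as f:
    for ex in examples:
        f.write(json.dumps(ex) + "\n")
```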

Insights from experience with foundation models

This basic understanding of how foundation models work helps explain why regulators, such as the EU, have distinguished GenAI systems and foundation models from other AI Systems, and what this means for organizations across the world as they develop, integrate, and deploy their own foundation models.

Foundation models are more complex than traditional AI models, which makes them substantially harder to test for accuracy and other key metrics. DLA Piper’s attorneys and data scientists are developing new testing methodologies to address this challenge. In working with leading organizations on the development of complex AI Systems, we have learned several key lessons that should be considered throughout the lifecycle of a foundation model.

  • Garbage in, garbage out: This tenet is, of course, as old as computer science itself, and is especially relevant for foundation models. A model is only as good as the data on which it is trained. If the data used to train a model is inaccurate, incomplete, unrepresentative, or biased, the model’s output will be affected. Where organizations are based in jurisdictions that impose accuracy obligations, such as in the United Kingdom, AI Systems that do not use sufficiently accurate data may be subject to regulatory intervention and any relevant penalties. Companies should develop ways to measure and track the quality of training data to mitigate the risk that the model perpetuates inaccuracies.
  • Bias beware: A model that produces biased outputs increases the risk that the organization will come under regulatory scrutiny for breaches of equality-focused regulation. Biases can manifest in different forms, such as gender, racial, or cultural biases. If not properly addressed, these biases can perpetuate and amplify societal inequalities when the models are used in real-world applications. This is particularly the case where the foundation model is used in sensitive scenarios, such as in the provision of treatment to patients in a medical setting or when selecting the best job applicant. It is therefore important that foundation models are thoroughly tested for bias prior to and during the deployment of the AI System; a simple illustration of such a check appears after this list.
  • Don’t believe everything you see: Foundation models sometimes produce false or misleading outputs, known as “hallucinations.” Liability may arise for model developers when individuals unwittingly use hallucinated outputs as an element in their decision-making process, leading to harm. It is essential that foundation models be continuously tested to track and reduce hallucinations prior to and during deployment.
  • Create and follow the paper trail: The capabilities of foundation models are double-edged; if not properly monitored and understood, they have the potential to cause harm to individuals. One way of limiting inadvertent or unexplainable harms is by creating comprehensive documentation throughout the AI System’s life cycle. Documentation may be used to create a better understanding of the model and the training data used, as well as to work backwards from outputs to determine where an error may have occurred. Documentation may also assist non-technical members of the organization, such as legal counsel, in tracking actions that may expose the organization to legal or commercial risks. Depending on the jurisdiction involved, such as the EU, this step may become mandatory for organizations as part of their transparency obligations, and it is therefore best practice to begin this process early in the development stage.
  • Does not compute: It is important to recognize that foundation models require substantial computational resources for training and deployment. Model training, for example, often requires specialized hardware and substantial time and energy. These resource requirements can become significant costs that may outweigh the overall benefit a foundation model offers. Organizations should weigh these costs when determining whether a bespoke foundation model suits their business needs or whether a white-label or API-based solution is more appropriate.
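
As referenced in the bias discussion above, testing can begin with simple checks. The sketch below, using hypothetical data from a notional hiring-screen model, compares selection rates across two groups and flags a large gap; a production-grade audit would apply established fairness metrics and counsel’s guidance for the relevant jurisdiction.

```python
# Illustrative bias check (not a complete fairness audit): compare the rate of
# positive outcomes across groups on a held-out test set. Data and tolerance
# below are hypothetical.
from collections import defaultdict

# (group, model_decision) pairs from a notional hiring-screen model,
# where 1 means "advance the applicant".
results = [("group_a", 1), ("group_a", 1), ("group_a", 0),
           ("group_b", 1), ("group_b", 0), ("group_b", 0)]

counts = defaultdict(lambda: [0, 0])          # group -> [positive decisions, total]
for group, decision in results:
    counts[group][0] += decision
    counts[group][1] += 1

rates = {group: positives / total for group, (positives, total) in counts.items()}
gap = max(rates.values()) - min(rates.values())
print(f"selection rates: {rates}, gap: {gap:.2f}")
if gap > 0.2:                                 # illustrative tolerance only
    print("Warning: selection-rate gap exceeds tolerance; investigate before deployment.")
```
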
Key takeaways and action steps

In the face of a rapidly developing AI landscape, it is crucial for organizations, regulators, and society to collaborate in creating a system of governance for foundation models that fosters innovation while protecting the rights of individuals.

The EU’s AI Act represents an opening gambit in developing an effective mechanism for organizations to implement AI Systems in a responsible and ethical manner. The clear focus on distinguishing foundation models from other AI Systems demonstrates an astute assessment of the potential opportunities and risks that they bring. Increased access to powerful models, with substantial capabilities and trained on near-limitless quantities of data, can be used for the benefit – or to the detriment – of society.

Organizations should ensure that foundation models are used with appropriate levels of internal governance to mitigate many of the potential challenges that could arise, including issues with bad training data, potential for biased outputs that risk regulatory scrutiny, and significant organizational expenditures.

To get ahead, we recommend that organizations consider the following:

  1. Embrace transparency: Ensure operational transparency, particularly in data analytics and artificial intelligence. This necessitates clear communication regarding algorithmic processes, data provenance, and potential systematic biases.
  2. Stay informed about regulatory changes: Given the evolving transparency requirements under the EU AI Act, organizations must diligently track these changes and verify their regulatory compliance. They should preemptively adjust their practices to meet the Act's transparency stipulations and modify their policies as necessary.
  3. Strive to meet benchmarked industry compliance standards: By monitoring how Providers disclose copyright-protected training data, organizations can ensure that their governance and controls align with current standards, benchmarked against peers in their industry and the latest regulatory and legal developments. This approach not only ensures legal compliance but also enhances reputation and strengthens stakeholder trust.
  4. Engage with stakeholders: Proactively collaborate with stakeholders beyond Providers, including academic institutions and civil society organizations. Integrating these entities into dialogues and decision-making enables the acquisition of diverse insights, which more accurately reflect public interests. Such engagement underscores the organization's dedication to transparency and accountability.
  5. Enhance documentation and communication: Improve the clarity and quality of documentation provided to downstream developers to expedite their understanding and use of the provided models. Robust, lucid documentation advances transparency and streamlines compliance efforts.
  6. Anticipate legal clarifications: Closely monitor the evolution of legal provisions around copyright, particularly as they relate to training processes and generative model outputs. Engage with legal experts to ensure full alignment with any interpretations or determinations issued by legislative bodies, regulators, or judicial institutions.

For more information on AI Systems, foundation models, and the emerging legal and regulatory standards, visit DLA Piper’s Focus page on Artificial Intelligence.

DLA Piper is at the forefront of working with organizations to navigate these uncharted waters towards their goal of developing effective foundation models and mitigating their legal risks. We continuously monitor updates and developments arising in AI and its impacts on industry across the world. For further information or if you have any questions, please contact any of the authors or your usual DLA Piper contact.

 

[1] Per the AI Act, a Deployer is any natural or legal person, public authority, agency or other body using an AI System under its authority except where the AI System is used in the course of a personal non-professional activity.

[2] Per the AI Act, a Provider is a natural or legal person, public authority, agency or other body that develops an AI System or that has an AI System developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge.

[3] Per the AI Act, an AI System is a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions, that influence physical or virtual environments.
