30 July 2024 · 12 minute read

AI Act Finalised – Here is what has been agreed

On 2 August 2024, the EU AI Act – the world’s first comprehensive law regulating artificial intelligence systems – will come into force.


The definition of AI systems

A central concept of the AI Act is its new definition of an AI system. This definition is closely based on the AI definition of the Organisation for Economic Co-operation and Development (OECD) and describes an AI system as a:

“machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The main components of the definition are:

  1. it is machine-based;
  2. it is designed to operate with varying degrees of autonomy;
  3. it may exhibit adaptiveness after deployment; and
  4. it infers, from the input it receives, how to generate outputs.

AI systems that are already covered by European Union harmonisation legislation and AI systems that are used exclusively for military, defence or national security purposes are excluded from the scope of the AI Act. The same applies to AI systems that are intended solely for scientific research, development or purely private and personal use.

AI systems released under free and open-source licences are generally not subject to the provisions of the AI Act, unless they are placed on the market or put into service as high-risk AI systems, as AI systems falling under the prohibited AI practices in Article 5, or as AI systems subject to the transparency obligations in Article 50.

AI systems released under free and open-source licences can also benefit from an exemption from the transparency obligations if their parameters, including the weights, the information on the model architecture and the information on model usage, are made publicly available. That said, the free and open-source exemptions do not apply if the model is designated as a general-purpose AI model with systemic risk.


Risk classification

The AI Act follows a risk-based approach: the higher the risk associated with an AI system, the stricter the requirements. Risk is defined as “the combination of the probability of an occurrence of harm and the severity of that harm”. On this basis, the AI Act distinguishes between the following categories:


Prohibited AI practices

Prohibited AI practices (ie AI systems posing an unacceptable risk) include the placing on the market, putting into service and use of AI systems that entail:

  • the subliminal influencing of individual behaviour;
  • the untargeted scraping of facial images from the internet or CCTV footage;
  • the inference of emotions of natural persons in the workplace and in educational institutions;
  • the deployment of social scoring; and
  • the processing of biometric data to infer sensitive personal data such as sexual orientation or religious beliefs.

Such AI systems are banned outright.


High-Risk AI Systems

High-risk AI systems are AI systems that are generally permitted, but subject to strict requirements. Such high-risk AI systems include:

  • AI systems intended to be used as a safety component of a product;
  • AI systems falling under harmonised legislation listed in Annex I of the AI Act;
  • certain AI systems used in the educational sector, in recruiting and employment processes, for credit scoring purposes (except for AI systems used to detect fraud), and risk assessment and pricing concerning natural persons in the case of life and health insurance; and
  • AI systems used for other purposes listed in Annex III of the AI Act.

The classification of high-risk AI systems remains flexible and open to future developments, as the Commission can adapt the catalogue in Annex III of the AI Act by means of delegated acts. A provider is also free to provide sufficient documentation demonstrating that, due to its specific characteristics, the AI system does not pose a high risk to the health, safety or fundamental rights of natural persons.

Providers of high-risk AI systems are required to establish, implement, document and maintain a risk management system to identify risks and adopt mitigating actions throughout the entire life cycle of the AI system, and to perform tests to understand how the system operates under real-world conditions. If the high-risk AI system involves the training of models with data, it must be developed on the basis of training, validation and testing data sets that are subject to the appropriate data governance and management practices set out in the AI Act.

Compliance with the requirements applicable to high-risk AI systems must be demonstrated by means of suitable technical documentation, which the provider must prepare before the AI system is placed on the market or put into service and keep up to date on an ongoing basis. This technical documentation must contain at least the information set out in Annex IV of the AI Act and is prepared as part of a self-assessment by the provider. Correspondingly, the provider is subject to various record-keeping (logging) obligations throughout the entire service life of the AI system.

Finally, providers of high-risk AI systems must carry out a conformity assessment procedure, the main objective of which is to demonstrate that the AI system to be placed on the market fulfils the requirements of the AI Act. Following a successful conformity assessment (by means of internal control or with the involvement of notified bodies), the high-risk AI system must be entered in the EU database to be set up by the Commission; the main purpose of registration is to create transparency for the public.

To make the AI system’s conformity with the requirements of the AI Act visible to the outside world, the provider must then affix a CE mark to the system. High-risk AI systems must also be designed and developed in such a way that their operation is sufficiently transparent and that they can be effectively overseen by natural persons for the duration of their use.


General Purpose AI models (GPAI)

The AI Act further contains specific provisions for general-purpose AI (GPAI) models. A GPAI model is characterised by the fact that it “displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications”. The AI Act differentiates between GPAI models without systemic risk and GPAI models with systemic risk, with the latter being subject to stricter requirements in line with the risk-based approach of the AI Act. A GPAI model is deemed to pose systemic risk if it has either been classified as such by the EU Commission or if it has high-impact capabilities, which is presumed to be the case if the cumulative computing power used for its training, measured in floating-point operations (FLOPs), is greater than 10^25.
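For a rough sense of what the 10^25 FLOP presumption means in practice, the following minimal Python sketch may help. The threshold value comes from the AI Act itself; the 6 × parameters × training-tokens estimate is a common rule of thumb from the machine-learning literature, not a calculation method prescribed by the Act, and the example figures are purely hypothetical.

```python
# Sketch: estimating whether a model's training compute crosses the
# AI Act's 10^25 FLOP presumption threshold for systemic risk.
# The "6 * parameters * training_tokens" rule of thumb is a common
# approximation from the ML literature, NOT a method prescribed by the Act.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # presumption threshold in the AI Act

def estimated_training_flops(num_parameters: float, training_tokens: float) -> float:
    """Rough estimate of cumulative training compute in FLOPs."""
    return 6 * num_parameters * training_tokens

def presumed_systemic_risk(num_parameters: float, training_tokens: float) -> bool:
    """True if the estimated compute exceeds the presumption threshold."""
    return estimated_training_flops(num_parameters, training_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS

# Hypothetical example: a 1-trillion-parameter model trained on 10 trillion tokens
flops = estimated_training_flops(1e12, 10e12)
print(f"{flops:.2e} FLOPs -> presumed systemic risk: {presumed_systemic_risk(1e12, 10e12)}")
```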

Providers of GPAI models that qualify as posing systemic risk on the basis of this calculation are subject to special reporting obligations to the EU Commission. In addition, all providers of GPAI models must:

  • prepare and continuously update technical documentation of the model, including its training and testing procedures and the test results, and provide downstream providers who intend to integrate the GPAI model into their own AI systems with all the information and documentation required for this purpose. The technical documentation must also include information on how the model was developed, the tasks the model is intended to perform and its expected energy consumption and – in the case of GPAI models posing a potential systemic risk – the evaluation strategies applied and the adversarial tests performed (eg red teaming),
  • develop a strategy to comply with applicable copyright provisions at Union level and prepare and make publicly available a sufficiently detailed summary of the content used for the training of the GPAI model in accordance with the template provided by the Artificial Intelligence Office (also ‘AI Office’) established by the European Commission, and
  • in the case of GPAI models with systemic risk, (i) additionally perform a model assessment, (ii) assess and mitigate potential systemic risks at Union level, (iii) track, document and report relevant information on serious incidents and possible remedial actions, and (iv) ensure an adequate level of cybersecurity protection.
 
Transparency

All AI systems, regardless of their risk level, are subject to minimum transparency obligations. These are intended to ensure a basic level of clarity and understanding across the board, for example by informing individuals that they are interacting with an AI system.


Addressees

The AI Act has extraterritorial effect, as it applies to:

  1. providers placing on the market or putting into service AI systems, or placing on the market general-purpose AI models, in the Union, irrespective of whether those providers are established or located within the Union or in a third country;
  2. deployers of AI systems that have their place of establishment or who are located within the Union;
  3. providers and deployers of AI systems that have their place of establishment or who are located in a third country, where the output produced by the system is used in the Union;
  4. importers and distributors of AI systems;
  5. product manufacturers placing on the market or putting into service an AI system together with their product and under their own name or trademark;
  6. authorised representatives of providers that are not established in the Union; and
  7. affected persons that are located in the European Union.

The question of which actor assumes which role in connection with which AI system is essential for identifying the specific requirements to be complied with. It must be answered on the basis of the catalogue of definitions in conjunction with the other provisions of the AI Act, and is not always easy in individual cases. In particular, actors may fulfil the requirements of several roles at the same time, or may change a role they have already assumed, with the result that stricter requirements apply. Particular caution is required in a group structure, eg if a group company initially procures, develops or uses an AI system only for itself, but other group companies later also become interested in the AI system. We will be happy to assist you in examining the respective requirements and roles, especially as the prioritisation rules and requirements that the AI Act stipulates along the value chain must also be taken into account in this context.


AI Governance

The European Commission has entrusted the AI Office with ensuring compliance with the AI Act, its requirements and their uniform interpretation, and has given it numerous powers, including the monitoring of GPAI models. To this end, the AI Office is supported by a scientific panel and an advisory forum with technical and regulatory expertise, designed to keep regulation in step with technological progress.


Sanctions

The AI Act also establishes a system of penalties that, as has been the case with several recent European regulations, is based on companies’ global turnover or a predetermined amount, whichever is higher.

Following this logic, the AI Act provides for fines of up to EUR35 million or up to 7% of the total worldwide annual turnover of the previous financial year in the event of non-compliance with regulations on prohibited AI practices.

For non-compliance with other requirements or obligations, including violations of GPAI-related requirements, a fine of up to EUR15 million or 3% of the global annual turnover of the previous financial year applies.

If notified bodies or competent authorities provide false, incomplete or misleading information, fines of up to EUR7.5 million or 1% of the global annual turnover of the previous financial year may be imposed.

For small and medium-sized enterprises (SMEs), including start-ups, the lower amount applies – in deviation from the principle that the higher amount is decisive.
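To illustrate the “whichever is higher” logic and the SME carve-out described above, here is a minimal sketch. The tier amounts and percentages are those stated in the AI Act; the function, its names and the turnover figure are purely illustrative.

```python
# Sketch of the AI Act's fine caps: the maximum fine is the higher of a fixed
# amount and a percentage of worldwide annual turnover, except for SMEs and
# start-ups, where the lower of the two applies.

FINE_TIERS = {
    "prohibited_practices":   (35_000_000, 0.07),  # prohibited AI practices: EUR 35m or 7%
    "other_obligations":      (15_000_000, 0.03),  # incl. GPAI obligations: EUR 15m or 3%
    "misleading_information": (7_500_000, 0.01),   # false/incomplete info: EUR 7.5m or 1%
}

def max_fine(tier: str, worldwide_turnover_eur: float, is_sme: bool = False) -> float:
    """Upper limit of the fine for a given violation tier (illustrative only)."""
    fixed_amount, turnover_share = FINE_TIERS[tier]
    turnover_based = turnover_share * worldwide_turnover_eur
    # General rule: whichever is higher; for SMEs/start-ups: whichever is lower.
    return min(fixed_amount, turnover_based) if is_sme else max(fixed_amount, turnover_based)

# Hypothetical example: EUR 2bn worldwide turnover, prohibited-practice violation
print(max_fine("prohibited_practices", 2_000_000_000))               # 140,000,000 (7% > EUR 35m)
print(max_fine("prohibited_practices", 2_000_000_000, is_sme=True))  # 35,000,000
```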


Timelines

After receiving final approval from the EU Parliament on 13 March 2024 and from the EU Council on 21 May 2024, the AI Act was published in the EU Official Journal on 12 July 2024.

The applicability of the AI Act follows a precise, staggered timeline, with transition periods of:

  • six months for the prohibitions on certain AI practices (ie 2 February 2025);
  • one year for the GPAI provisions (ie 2 August 2025);
  • two years for the remaining provisions (ie 2 August 2026); and
  • 36 months for provisions applicable to products that are already regulated by other EU harmonisation legislation, such as in the pharmaceutical and medical devices sectors (ie 2 August 2027).
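As a simple illustration, the application dates above can be derived from the entry into force on 2 August 2024 plus the respective transition periods. The following sketch is purely illustrative date arithmetic; nothing in it is prescribed by the Act.

```python
# Sketch: deriving the AI Act's application dates from entry into force
# plus the transition periods (illustrative only).
from datetime import date

ENTRY_INTO_FORCE = date(2024, 8, 2)

def add_months(d: date, months: int) -> date:
    """Minimal month arithmetic; sufficient here because all results land on day 2."""
    total = d.month - 1 + months
    return date(d.year + total // 12, total % 12 + 1, d.day)

TRANSITION_PERIODS_MONTHS = {
    "prohibited AI practices": 6,    # 2025-02-02
    "GPAI provisions": 12,           # 2025-08-02
    "remaining provisions": 24,      # 2026-08-02
    "Annex I regulated products": 36 # 2027-08-02
}

for label, months in TRANSITION_PERIODS_MONTHS.items():
    print(f"{label}: {add_months(ENTRY_INTO_FORCE, months)}")
```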


Immediate need for action

Even if periods of between six months and two years may initially sound like a distant concern, on closer inspection companies do not have much time to prepare for the requirements of the AI Act. Reliably and completely evaluating who has to deal with which AI, to what extent, for what purposes and in what role is an interdisciplinary exercise in itself and requires close collaboration between a wide range of stakeholders from different areas, particularly legal, operations and business, as well as across group companies, service providers and jurisdictions.

This first step is essential to clarify which requirements apply to the AI system in question and who must comply with them, and it is likely to take several weeks, if not months, depending on the size of the company or group concerned. On this basis, the applicable legal requirements must then be defined and responsibilities and processes established, and these must be implemented step by step in line with the individual requirements. Depending on the type of AI system in question, there are also documentation, reporting, transparency and labelling obligations that need to be reviewed and prepared. Seen in this light, 6, 12 or 24 months to implement all of this is not a long time.
