
11 January 2024 | 11 minute read

The role of harmonised standards as tools for AI Act compliance

Introduction

Following the political agreement on the EU’s AI Act, the International Organization for Standardization (ISO) published ISO/IEC 42001:2023 – Artificial intelligence – Management system (ISO/IEC 42001), the world’s first standard on AI management systems (AIMS), on 18 December 2023.

This standard provides guidance for establishing, implementing, maintaining, and continually improving an AIMS within the context of an organisation.

ISO/IEC 42001 is expected to be adopted by the European Standardisation Organisations CEN and CENELEC and therefore to gain the status of a harmonised European standard. Conformity with the requirements of ISO/IEC 42001 will operate as evidence of an organisation’s responsibility and accountability regarding its role with respect to AI systems.

We are advising clients across all sectors on their overall AI governance frameworks in preparation for compliance with the AI Act. With an agreed text of the AI Act expected early this year, leveraging existing and emerging AI standards will be key to getting a head start on operationalising compliance.


EU AI Act and the request for standardisation

In line with the approach of the EU’s New Legislative Framework (NLF) for product safety, the draft EU AI Act requires, in its regulation of high-risk AI systems, that harmonised standards be developed against which conformity will be assessed. It sets out specific requirements applicable to standard-setting organisations in the course of their work, including requirements relating to the promotion of investment and innovation and the creation of legal certainty and competitiveness.

In 2022, the European Commission issued a request to European Standardisation Organisations (ESOs) to begin work on the preparation of such standards to support the development of safe, trustworthy AI. This EU AI Standardisation Request specifically focused on the requirement for standards in respect of the following obligations applicable to high-risk AI: risk management systems, governance and quality of datasets, record keeping, transparency and information provisions for users, human oversight, accuracy specifications, robustness specifications and conformity assessment.

In response to the request, standard-setting bodies began work on developing these standards, giving highest priority to ISO/IEC 42001, reflecting the importance of a management system as a foundational governance tool for high-risk AI.

While there is ongoing debate about the challenge facing standard-setting organisations, traditionally focused on the physical and technical realm, in now being required to consider ethical and social issues such as trustworthiness, bias and discrimination, and environmental impact,1 ISO has taken up this challenge. In total, 22 ISO standards now exist in respect of AI, and ISO/IEC 42001 will be followed by further harmonised standards.2


Presumption of Conformity

While ISO/IEC 42001 is a voluntary standard, one of the most obvious incentives for adopting it and other impending ISO standards is the benefit of the presumption of conformity under Article 40 of the current draft EU AI Act, assuming the standard is adopted by the European Standardisation Organisations and published in the Official Journal. Article 40 of the draft EU AI Act states that high-risk AI systems (or GPAI/foundation models, depending on the final agreed text) which are in conformity with relevant harmonised standards shall be presumed to be in conformity with the relevant requirements for those systems (e.g. Title III, Chapter 2 (requirements for high-risk AI)).


The NSAI and relevance for Ireland

Ireland has played a unique role in the development of ISO/IEC 42001. The Project Editor of ISO/IEC 42001 was a national committee member of the National Standards Authority of Ireland (NSAI). The Project Editor’s role involved maintaining the momentum and continuous progression of the drafts and working with ISO committee managers to advance them to ballot. Project editors also work closely with subject experts and contribute to the texts.

The NSAI is seeking designation as an AI notified body responsible for assessing conformity with standards and implementing certification schemes to support Irish businesses in navigating the AI regulatory landscape. The NSAI has also set out a commitment to creating awareness regarding AI standards and certification to support Irish business and AI system providers and users.3


ISO/IEC 42001 – Key Takeaways

ISO/IEC 42001 applies to all organisations (regardless of size) developing, providing, or using any AI-based products or services.

In its substantive provisions, ISO/IEC 42001 describes the processes, considerations and actions required to establish, implement, maintain, and continually improve an AIMS. An AIMS involves the establishment of policies and processes to achieve the organisation’s AI objectives. The standard describes the end-to-end process involved in developing and implementing such a management system:

  • Organisations must understand the context in which they operate and deploy AI, identifying relevant issues which may affect their ability to achieve the intended results of the AI management system. Organisations must also determine and document the scope of the management system, including its boundaries and applicability, with reference to the relevant issues and the needs of interested parties.

  • There are detailed guidelines on the requirements of leadership or ‘top management’, which ensure that organisations cannot approach the implementation of this standard as a ‘box-ticking exercise’ or neglect to implement it appropriately. Senior management should demonstrate leadership and commitment by, for example, ensuring adequate resources are available, assigning responsibility for ensuring the system meets its requirements and reporting on its performance, and establishing an AI policy which meets certain criteria, is linked to clear objectives and includes a commitment to the continuous improvement of the management system.

  • AI risk assessment processes must be established to identify and appropriately analyse potential risks and their consequences. AI risk treatment processes ensure that appropriate action is taken, or controls are implemented, in relation to identified risks. A statement of applicability will describe the controls the organisation has opted to implement and include justification for the exclusion of other controls. An AI impact assessment should be implemented to identify the potential consequences of deploying an AI system for individuals or society, including in relation to foreseeable misuse. Where the intended results of the management system are not achieved, controls should be reviewed and adjusted. AI risk assessments and AI system impact assessments should be re-performed where there are significant changes to AI systems.

  • As an ongoing obligation, organisations should conduct performance reviews of the management system and evaluate its effectiveness, having determined the appropriate scope, methodology and timing. An internal audit programme should also be established and maintained. In terms of management oversight, ‘top management’ is expected to regularly review the management system and decide on any need for continuous improvement and change. Where the organisation identifies a nonconformity, ISO/IEC 42001 requires not only remedial action to correct it and deal with its consequences, but also measures to eliminate the cause of the nonconformity and avoid similar nonconformities.

  • In its Annexes, ISO/IEC 42001 sets out specific and practical detail on suggested control objectives and controls which can be used when addressing risks relating to the design and operation of AI systems. While the controls are not mandatory and organisations are free to design their own, they clearly form the basis of a checklist for organisations seeking to certify under ISO/IEC 42001. This checklist of controls is accompanied by more detailed normative guidance on the controls and their implementation.

  • The Annexes also include guidance on potential AI-related organisational objectives and risk sources, an overview of the use of AI management systems across domains and sectors, and guidance on the integration of ISO/IEC 42001 with other relevant standards.
 
Comparison to Other Standards

ISO/IEC 42001 is one example of national and international bodies seeking to establish standards for the safe and effective use of AI. In January 2023, the U.S. National Institute of Standards and Technology (NIST) released its AI Risk Management Framework.4 Like ISO/IEC 42001, the NIST AI RMF is a voluntary standard, but we anticipate that it may be incorporated into regulations or standards from US federal and state regulators. There are similar voluntary standards in Canada,5 the United Kingdom,6 Australia, Japan, and South Korea.7

In October 2023, the Biden administration issued an Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence.8 This Order instructed more than 20 federal departments to develop standards and best practices for the use of AI in their respective jurisdictional areas. It also encouraged the independent regulatory agencies to use their enforcement powers to address issues with the use of AI in their areas of authority and to propose any additional regulations they consider warranted.9 The federal departments are to finish their work on standards and best practices within the next 6-12 months. It is expected that new guidance and, perhaps, new regulation will follow soon after.


Conclusion

According to the draft AI Act, standardisation should play a key role in providing technical solutions to providers to ensure compliance with the Act. It is clear from the activity of the European Commission and the standard-setting bodies that legislators and authorities are aware of the complexity involved in complying with novel legislation in a hyper-evolving area with such far-reaching, unknowable consequences as the use of AI.

Pursuant to the draft AI Act, the post-market enforcement powers of national supervisory authorities may be invoked where an AI system presents risks to fundamental rights, health and safety or the environment and does not comply with applicable requirements (including the requirement for providers of high-risk AI systems to implement a risk management system). These include powers to order corrective action, prohibition, restriction, or withdrawal from the market.

In light of the proliferation of AI regulation and guidelines around the globe, the potential for inconsistency of interpretation and enforcement, and the presumption of conformity established by Article 40, the adoption of standards should be viewed as an important enabler for organisations, both in their compliance activities and in consistently demonstrating to global markets and stakeholders that their products and services based on or incorporating AI are trustworthy.

While, as mentioned above, numerous AI-related standards have been and will continue to be developed, ISO/IEC 42001 will, for organisations responsible for high-risk AI, represent a valuable tool for operationalising compliance with the AI Act and for ensuring they benefit from the presumption of conformity it provides.

Organisations also have a unique opportunity to help shape standards still in development by engaging with the entities developing them. These entities are seeking input from stakeholders, and bringing real-world experience and industry-specific perspectives can prove valuable in how the standards are developed. If you would like more information on how to engage in this stakeholder process, please reach out to us.

DLA Piper’s AI and Data Analytics group helps clients in many industries and jurisdictions establish AI governance programs and prepare for compliance with developing standards, including the monitoring and testing of AI systems for unintended effects.

To find out more on AI and AI laws and regulations, visit DLA Piper’s Focus on Artificial Intelligence page and Technology’s Legal Edge blog.

If your organisation is deploying AI solutions, you can undertake a free maturity risk assessment using our AI Scorebox tool.

If you would like to discuss any of the issues discussed in this article, get in touch with Mark Rasdale (Partner), Bennett Borden (Partner and Chief Data Scientist), Claire O’Brien (Senior Associate) or your usual DLA Piper contact.
