
15 July 2024 · 8 minute read

Council of Europe adopts first-ever international treaty on AI

The Council of Europe adopted the first legally binding international framework convention aimed at ensuring respect for human rights, democracy, and the rule of law in the use of AI systems in both the public and private sectors. The Convention will be open for signature from 5 September 2024, including to non-European countries. It outlines a regulatory framework that covers the entire lifecycle of AI systems, from design to decommissioning, addresses the associated risks, and encourages responsible innovation.


Aiming for responsible AI governance

The primary objective of the Convention is to ensure that the potential of AI technologies is harnessed responsibly, respecting, protecting, and realizing the international community’s core values: human rights, democracy, and the rule of law. AI systems offer unprecedented opportunities, but at the same time they pose risks such as discrimination, gender inequality, the undermining of democratic processes, violations of human dignity or individual autonomy, and even misuse by states for repressive purposes.


Scope, definitions, and global approach – the AI lifecycle

The Convention’s provisions focus on the lifecycle of AI systems, considering its different phases, from conception and design through deployment and monitoring to decommissioning. The lifecycle concept is also central to the European Regulation on Artificial Intelligence (AI Act), particularly with reference to the transparency obligations and the adoption of a risk management system.

The lifecycle of AI systems according to the OECD

But what does "AI system" mean in the Convention? The Convention defines AI systems not on the basis of the literal definition in the AI Act but on the one adopted by the OECD on 8 November 2023. The two definitions coincide in substance, since they rest on the same key features of AI systems: variable autonomy and adaptability, the capacity for inference, and the generation of predictions, content, recommendations, or decisions that can influence physical or virtual environments. The choice of the OECD definition reflects the need to strengthen international cooperation on AI and to facilitate efforts to harmonize its governance at the global level.

The Convention doesn’t aim to regulate all activities in the lifecycle of AI systems, but only those that can interfere with human rights, democracy, and the rule of law. The Council of Europe’s approach is therefore distinctive: unlike the AI Act, it doesn’t tie its material scope to specific AI models, systems, or practices, but rather to the individual activities within the AI lifecycle and the impact they could have, even irrespective of the risk posed by the system as a whole.

The Convention regulates the use of AI systems in both the public and private sectors. Parties are required to adopt or maintain appropriate legislative, administrative, or other measures to implement its provisions. These measures are graduated and differentiated according to the severity and likelihood of negative impacts on human rights, democracy, and the rule of law throughout the lifecycle of AI systems.


General principles in the AI lifecycle

Following the first two chapters on general provisions and obligations, the third chapter of the Convention establishes a set of general principles to be implemented in accordance with national legal frameworks. These principles are formulated with a high level of generality so they can be applied flexibly in various rapidly changing contexts.

The first principle calls for measures to respect human dignity and individual autonomy. In particular, the use of AI systems shouldn’t lead to the dehumanization of individuals, undermine their ability to act autonomously, or reduce them to mere data points. Nor should AI systems be anthropomorphized in a way that interferes with human dignity. A person’s autonomy, that is, the ability to self-determine, make decisions without coercion, and live freely, is crucial to human dignity. In the AI context, preserving individual autonomy means guaranteeing people control over the use and impact of AI technologies without compromising their free choice. The human-centric principle also permeates the AI Act (which, among its objectives, aims to "promote the uptake of human-centric and trustworthy artificial intelligence") and the Italian bill on AI currently under review by the Italian Parliament.

The second principle of the Convention focuses on the transparency and supervision of AI systems, a principle that is also particularly relevant in the AI Act with reference to high-risk and certain other AI systems. The inherent complexity and opacity of AI systems make robust supervision necessary: their decision-making processes and overall functioning should be clear and accessible to the relevant stakeholders. The Convention mandates adopting or maintaining measures to ensure transparency and monitoring tailored to specific contexts and risks, including the identification of AI-generated content.

When it comes to transparency, explainability and interpretability are of utmost importance. The former requires clear explanations of why an AI system provides certain information and produces specific predictions, content, recommendations, or decisions, particularly in sensitive areas such as healthcare, financial services, immigration, border services, and criminal justice. The latter refers to the ability to understand how an AI system makes predictions or decisions, that is, the extent to which the output-generation process can be made accessible and understandable to non-experts. It’s crucial to acknowledge, however, that information disclosure may conflict with privacy, confidentiality and trade secrets, national security, and the rights of third parties. A fair balance should therefore be struck in implementing the principle of transparency, taking all of these factors into account.

Supervision, a crucial element in the ethical use of AI systems, refers to the various mechanisms and processes that monitor and guide their lifecycle activities. These mechanisms can take the form of legal, policy, and regulatory frameworks, recommendations, guidelines, codes of conduct, audits and certification programs, error detection tools, or the involvement of supervisory authorities. The Convention recognizes the importance of these mechanisms in ensuring the responsible development and deployment of AI systems.

Accountability and responsibility form another cornerstone principle of the Convention. It requires the establishment of mechanisms to hold organizations, entities, and individuals involved in the lifecycle of AI systems accountable for any negative impacts on human rights, democracy, or the rule of law. This principle is closely intertwined with transparency and supervision: their mechanisms enable a clearer understanding of how AI systems work and how they produce their outputs, thereby facilitating the exercise of accountability.

The Convention goes on to set out four other equally important principles: equality and non-discrimination (listing several normative references to be considered and the various biases that may affect AI systems); protection of personal data; reliability, based on technical standards and measures concerning robustness, accuracy, data integrity, and cybersecurity; and safe innovation in controlled environments (e.g., regulatory sandboxes).


Remedies, procedural safeguards, and risk management: Possible moratorium on AI systems

As regards remedies, the Convention requires parties to apply their existing regulatory regimes to activities in the AI system lifecycle. For these remedies to be effective, it provides for the adoption or maintenance of specific measures aimed at documenting and making certain information available to the people concerned and at ensuring that they can effectively lodge complaints with the competent authorities.

Transparency and user awareness are also key to the interaction with AI systems: the Convention requires that those interacting with an AI system be informed that they are dealing with an AI system and not with a human being.

There’s also a provision on the need to identify, assess, prevent, and mitigate, ex ante and iteratively where necessary, potential risks and impacts on human rights, democracy, and the rule of law throughout the AI system lifecycle. Parties have to develop a risk management system based on concrete and objective criteria. The Convention also requires parties to assess the need for a moratorium, bans, or other appropriate measures with respect to AI systems whose use is incompatible with respect for human rights, democracy, and the rule of law. Parties remain free to define the concept of incompatibility and the scenarios requiring such measures.


Implementation, effects, and entry into force of the Convention

Implementing the Convention requires due consideration of the specific needs and vulnerabilities of people with disabilities and children and the promotion of digital education for all population segments.

Parties are free to apply previous agreements or treaties relating to the lifecycle of AI systems covered by the Convention, but they must adhere to the Convention’s goals and objectives and not assume conflicting obligations.

As of 5 September 2024, the Convention will be open for signature not only by the Member States of the Council of Europe but also by the third countries that contributed to its drafting, including Argentina, Australia, Canada, Japan, Israel, the Vatican City State, and the US, as well as by EU Member States. Once in force, other non-member states may be invited to accede. The Convention will enter into force on the first day of the month following the expiration of a period of three months from the date on which at least five signatories, including at least three Member States of the Council of Europe, have expressed their consent to be bound (for example, if the fifth such consent were expressed on 10 January, the Convention would enter into force on 1 May).
