April 18, 2023 | 5 minute read

Canada outlines proposed regulation of AI systems in companion paper to the Artificial Intelligence and Data Act

As discussed in an earlier article, Canada has introduced proposed legislation regulating artificial intelligence systems, the Artificial Intelligence and Data Act (AIDA), as part of Bill C-27. A recent companion paper outlines some of the government’s plans, including the consultation process that will lead to supporting regulations and, eventually, the legislation’s coming-into-force no sooner than 2025. This paper fills in some, but not all, of the gaps and uncertainty raised by the proposed AIDA, which is drafted to leave many of its details to be worked out later.

The companion paper states that the AIDA aims to set the foundation for the responsible design, development and deployment of AI systems that impact Canadians’ lives, specifically ‘high-impact’ artificial intelligence systems. The types of systems that will be considered ‘high-impact’ and the obligations imposed on their operators are to be detailed in not-yet-published regulations. According to the companion paper, factors to be considered in determining whether an AI system is ‘high-impact’ include the risk of harm to health and safety, the potential adverse impact on human rights, the severity of potential harm, the scale of use, harms that have already taken place, the extent to which opt-outs are available, the potential impact on vulnerable people and the degree to which the risks are already regulated under other laws.

The companion paper highlights four types of artificial intelligence systems as being most likely to be regulated because of their potential impact:

  • Screening systems impacting access to services or employment;
  • Biometric systems used for identification and inference;
  • Systems that can influence human behaviour at scale; and
  • Systems critical to health and safety.

The companion paper highlights the risks of harm to individuals and the potential for bias in such systems, for example where an artificial intelligence system relies on improper factors when making decisions and produces biased outputs that have an adverse differential impact based on any of the prohibited grounds of discrimination recognized by the Canadian Human Rights Act. Such bias may result from express reliance on a prohibited ground, but also from reliance on factors that correlate with prohibited grounds. The companion paper offered the following example:

… individual income often correlates with the prohibited grounds, such as race and gender, but income is also relevant to decisions or recommendations related to credit. The challenge, in this instance, is to ensure that a system does not use proxies for race or gender as indicators of creditworthiness. For example, if the system amplifies the underlying correlation or produces unfair results for specific individuals based on the prohibited grounds, this would not be considered justified.
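To make the proxy problem concrete, the following sketch (our own hypothetical illustration, not drawn from the companion paper) shows how a model that scores creditworthiness using only an income feature can still produce adverse differential outcomes across a protected group where income correlates with that group. The synthetic data, the approval threshold and the four-fifths ratio used as a screen are all assumptions for illustration.

    # Hypothetical illustration only: a facially neutral feature (income) that
    # correlates with a protected attribute can act as a proxy for it.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic applicants: a protected attribute and an income feature that
    # correlates with it, as in the companion paper's example.
    protected = rng.integers(0, 2, size=n)          # 0 = group A, 1 = group B
    income = 50_000 + 15_000 * protected + rng.normal(0, 10_000, size=n)

    # A credit "model" that never sees the protected attribute directly but
    # approves applicants above an income threshold (assumed for illustration).
    approved = income > 60_000

    # Approval rate of the less-favoured group divided by the other group's;
    # values below roughly 0.8 are a common rough screen for disparate impact.
    rate_a = approved[protected == 0].mean()
    rate_b = approved[protected == 1].mean()
    ratio = min(rate_a, rate_b) / max(rate_a, rate_b)

    print(f"correlation(income, protected) = {np.corrcoef(income, protected)[0, 1]:.2f}")
    print(f"disparate impact ratio = {ratio:.2f}")

Even though the protected attribute never enters the decision, the correlated income feature reproduces the disparity, which is the kind of unjustified result the companion paper describes.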

The AIDA would impose significant identification, assessment, record-keeping, mitigation and compliance obligations on the designers and operators of ‘high-impact’ artificial intelligence systems. The companion paper outlines the principles of human oversight in the operation of such systems, transparency to the public about how a system is being used, and fairness and equity in identifying, assessing and mitigating the potential harms of high-impact systems, including discriminatory results. Safety (including protection from potential misuse), accountability and validity are also factors that could be considered in the assessment.

While the companion paper states that the regulatory obligations would be proportionate to risk and grounded in international standards, the details are not spelled out.

In terms of compliance, the companion paper suggests that in the first years after the law comes into force, the focus would be on education and helping businesses become compliant. Enforcement through administrative monetary penalties, regulatory offences and criminal offences (which, as we discussed in our earlier article, could be significant, including criminal penalties of up to $25 million or 5 percent of global revenues) would come later.

The companion paper also recognizes that multiple parties are often involved in designing, developing, deploying and using AI systems. For example, it mentions that contributors to general-purpose open-source AI software would not be regulated, although entities that deploy ‘fully functioning’ high-impact systems would be. Similarly, entities that use a system developed by someone else would have to comply with the applicable requirements for managing that system, given the risks and limitations documented by the provider.

It is unclear where this would leave ‘general-purpose’ artificial intelligence systems such as generative AI systems. However, the companion paper states that the AIDA is designed as a flexible, adaptive framework to be refined through consultation, and given how rapidly AI is being developed and adopted, it is not inconceivable that future iterations will address these systems more directly. The companion paper does note that some AI systems generate or manipulate content, affect the expression of opinions or beliefs, involve personal or sensitive data, or interact with individuals. These activities could make those AI systems ‘high-impact’ systems based on their intended uses (such as content generation, moderation or ranking) or on their potential for more nefarious uses such as misinformation, deception or fraud.

The consultations outlined by the companion paper will take several years, and the AIDA will therefore not come into force until 2025 at the earliest (and could take even longer). This seems like an incredibly long time given how quickly AI is being developed and used in many aspects of society. We will continue to monitor these consultations on the proposed regulations closely, particularly on the definition of ‘high-impact’ systems, as well as developments in the regulation of artificial intelligence systems in other jurisdictions, in this fast-moving area of the law.