Canada introduces legislation to regulate artificial intelligence systems
As part of its re-introduced privacy legislation, Bill C-27 (see our previous article about the predecessor privacy bill C-11 from 2020), the government has proposed a new Artificial Intelligence and Data Act (“AIDA”). AIDA, if it becomes law, would be the first broad-based regulation of artificial intelligence systems in Canada. It would address potential privacy and bias harms of artificial intelligence-based systems, particularly ‘high-impact systems’, by implementing certain safeguards and penalties for improper or reckless use. At the same time, it could apply unintentionally to a broader range of cloud or neural network-based technologies, an application that will need to be considered carefully.
The types of systems implicated by AIDA are defined very broadly:
artificial intelligence system means a technological system that, autonomously or partly autonomously, processes data related to human activities through the use of a genetic algorithm, a neural network, machine learning or another technique in order to generate content or make decisions, recommendations or predictions.
While the definition focuses on genetic algorithms, neural networks and machine learning, it also includes “or another technique”. Although “another technique” may, as a matter of statutory interpretation, be qualified by the other items in the list, many cloud-based systems could find themselves fitting within the definition. Notably, the drafters saw fit to exclude certain Ministries (those related to defence and national security, as well as a yet-to-be-published list of other government bodies), but have not signaled any exclusion for the commercial, media or education industries.
AIDA states that anyone who designs, develops, makes available for use, or manages the operations of an artificial intelligence system is a “person responsible” for that system.
Any person responsible for an artificial intelligence system who processes or makes available anonymized data (including for the purpose of designing or developing an artificial intelligence system) will be required to establish measures regarding the manner in which the data is anonymized and the use or management of that data. The details of the requirements for those measures will follow in subsequent regulations. Each responsible person will also be required to keep records of those measures available for inspection.
In the current draft of AIDA, only the Minister of Industry may order the production or inspection of those records, or order an audit (the cost of which is borne by the responsible person, not the Ministry). The Minister may also order a person to address anything referred to in the audit report.
While AIDA does contemplate that some information will be kept confidential, only information that actually meets the common law definition of confidential information qualifies. Anything else in an audit report or in records inspected by the Minister may not be confidential and, thus, may be subject to release by the Ministry or through a public access-to-information request. Notably, the government has signaled its ability to “name and shame” those who contravene AIDA on a publicly available website.
“High-impact systems”
AIDA would impose significantly more restrictive requirements on “high-impact systems”. The scope of artificial intelligence systems that are considered high-impact systems will be prescribed in subsequent regulations. Everyone who is responsible for an artificial intelligence system will be required to assess whether their system is a high-impact system and maintain records of this assessment.
If a system is determined to be a high-impact system, the people making the system available or managing it must additionally:
- identify, assess and mitigate risks of harm or biased output that could result from the system. “Biased output” means outputs, decisions or recommendations from an AI system that “adversely differentiates, directly or indirectly and without justification, in relation to an individual on one or more of the prohibited grounds of discrimination”. Outputs that are intended to prevent disadvantages to a person or group based on such grounds are, however, excluded from being considered biased output;
- establish measures to monitor the mitigation measures implemented for a high-impact system and maintain records of those measures available for inspection;
- publish a plain-language description of the system, including how it will be used, the type of output, decisions or recommendations it will make, and the mitigation measures in place; and
- notify the government if use of the system results, or is likely to result, in material harm.
As with general artificial intelligence systems, the government may request the supporting documents for the above measures, or order an audit of the artificial intelligence system and the mitigation measures in place. For high-impact systems, though, the system may be ordered to be shut down if there is a serious risk of imminent harm, defined as physical or psychological harm to an individual, damage to an individual’s property, or economic loss to an individual.
Various penalties may be imposed for administrative violations (the amounts of which will be specified in yet-unseen regulations), and AIDA also includes criminal offences for more serious activity. Criminal penalties may reach $25 million or 5% of global revenues for knowingly or recklessly using an artificial intelligence system that is likely to cause serious physical or psychological harm to an individual, where it does cause such harm.
The Consumer Privacy Protection Act, introduced alongside AIDA, also contemplates regulating “automated decision systems” that use personal (that is, non-anonymized) information and have a “significant impact” on individuals. Under this proposed legislation, an “automated decision system” is a system that replaces human judgment with a “rules-based system, regression analysis, predictive analytics, machine learning, deep learning, a neural network or other technique”. The Consumer Privacy Protection Act would require that organizations provide, on request, an explanation of the prediction, recommendation or decision produced by the automated decision system, including the type of information used and the reasons or principal factors that led to the prediction, recommendation or decision.
What is next?
Bill C-27 has only recently been introduced and no coming-into-force date for AIDA has been announced. We will continue to monitor its progress and will be sharing updates.
While AIDA provides an outline of the requirements for artificial intelligence systems, significant details will only be included in the supporting regulations. These details include the definition of ‘high-impact systems’ to which the additional mitigation and monitoring obligations apply, the types of mitigation measures that would be appropriate, and the quantum of administrative penalties. There will likely be public consultations on the implementation details in the regulations.
Organizations developing or contemplating artificial intelligence systems should carefully consider what procedures may be appropriate for mitigating biased output and monitoring those mitigation measures, as well as the contemplated record keeping, under this proposed legislation.
We expect there will be significant analysis and debate in the coming days around this legislation.
This article provides only general information about legal issues and developments, and is not intended to provide specific legal advice. Please see our disclaimer for more details.