

March 4, 2024 | 13 minute read

New Canadian law attempts to create a safer online world

The Online Harms Act and what it means for online service providers

“Online harms have real world impact with tragic, even fatal, consequences”, begins Canadian Heritage’s announcement of Bill C-63, the Online Harms Act, which was introduced in the House of Commons of Canada on February 26, 2024. Harmful content, such as hate speech, terrorist propaganda, and content that sexualizes children, has proliferated as the internet has transformed society, amplifying its real-world impact on Canadians and, in particular, children.

The introduction of the Act attempts to usher in rigorous online protections for internet users. Certainly, protecting vulnerable groups is a laudable cause, but the government’s approach can be expected to face pushback in the political realm, particularly as it relates to expansions of the Criminal Code and the Canadian Human Rights Act.

In all, as it relates to the operation of social media services, the proposed Act does provide some welcome guidance and safeguards; however, the Act has been accused of both going too far and not far enough. For example, it addresses only harms arising from social media services, and not private messaging services, where arguably the most drastic harm occurs. It has also raised concerns of overreach in its attempts to overhaul criminal and human rights law to prevent the commission of hate-motivated acts.

This article provides an overview of the current proposed Act. We will continue to provide updates as material changes occur throughout the Act’s journey to legislation.

What is “harmful content” under the Act?

As Michael Geist notes, the Act can rightly be considered in two parts: the first targets online social media service providers and the prevention of online child pornography (these can be described as the “harmful content” provisions); the second, more controversially, expands Criminal Code and Canadian Human Rights Act prohibitions (these can largely be described as the “hateful acts” provisions), which may apply to conduct entirely independent of online activity.

It is important to note that the overall goal of the Online Harms Act with regard to harmful content is to promote the online safety of Canadians, with specific emphasis on children’s physical and mental health. The Act therefore justifiably targets certain types of harmful content for heavy censorship, while aiming to strike a balance between risk mitigation and respect for freedom of expression.

Under the Act, harmful content is broadly defined as: (1) intimate content communicated without consent (including deepfakes); (2) content that sexually victimizes a child or revictimizes a survivor; (3) content that induces a child to harm themselves; (4) content used to bully a child; (5) content that foments hatred; (6) content that incites violence; and (7) content that incites violent extremism or terrorism. While each sub-category is separately defined under the Act, and there are exclusions in certain instances, further refinement is expected in regulations yet to be published.

Who would be impacted?

Online service operators captured under the Act will generally have a heavier burden of compliance, transparency, and accountability compared to today’s standards, and the expansive definitions will likely capture a broad group of operators.

The Act broadly defines “social media services” to include any website or application accessible in Canada whose primary purpose is to facilitate interprovincial or international online communication among users by enabling them to access and share content with the public. This would include communicative tools, such as forums, chatrooms, and bulletin boards, that pre-date the current social media era. The Act also specifically targets livestreaming and adult content services, though, interestingly (and perhaps sensibly), the drafting seems to require that a service allow users to both access and share such content, emphasizing the two-way communication required for regulation under this Act.

However, not all social media services are regulated by the Act. To be a regulated service, a social media service must either have a number of users meeting a threshold to be set by the regulations (as yet unpublished, and to be segregated by category of service) or fall within a category or character of service set out in the regulations, regardless of its number of users.

What does this mean for operators of social media services?

All operators of social media services must cooperate with the newly established Digital Safety Commission of Canada to assist the Commission in determining whether the service is regulated. The fact that this new Act comes complete with a Commission to oversee and enforce it and an ombudsperson to advocate in the public interest speaks to the importance the current government places on the issue of online hate and violence.

The Act also clarifies that Canada’s legislation on the mandatory reporting of Internet child pornography applies to social media services, simplifying the mandatory notification process, requiring additional disclosure in cases where content is manifestly child pornography, and extending the limitation period for prosecuting an offence to five years.

For regulated services, the meat of the Act applies. Operators must first implement measures that are adequate to mitigate the risk that users will be exposed to harmful content on their service. For example, operators should have tools and processes to flag harmful content (and permit users to do so), review and evaluate content that may be considered harmful, and remove or make inaccessible any content that is deemed harmful. Nothing in the Act requires an operator to proactively search for harmful content; the one exception is content that sexually victimizes a child or revictimizes a survivor, where the Act notes that the regulations may require the operator to use technological means to prevent that content from being uploaded to the service.
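
By way of illustration only, the TypeScript sketch below shows one way an operator might model the flag, review, and remove workflow described above. Every name here (HarmCategory, ContentFlag, flagContent, resolveFlag) is a hypothetical assumption for the sketch; the Act does not prescribe any particular implementation, and the forthcoming regulations will shape what counts as adequate.

```typescript
// Hypothetical sketch only: the Act does not prescribe an implementation.
// Categories loosely track the seven types of harmful content defined in the Act.

type HarmCategory =
  | "intimate_content_without_consent"
  | "sexually_victimizes_child_or_revictimizes_survivor"
  | "induces_child_to_self_harm"
  | "used_to_bully_child"
  | "foments_hatred"
  | "incites_violence"
  | "incites_violent_extremism_or_terrorism";

type FlagStatus = "pending_review" | "dismissed" | "removed_or_inaccessible";

interface ContentFlag {
  contentId: string;
  category: HarmCategory;
  flaggedBy: string; // user identifier; users must be able to flag content
  flaggedAt: Date;
  status: FlagStatus;
}

// Users (or internal tooling) flag content; flags are the entry point for review,
// since the Act generally does not require proactive searching for harmful content.
function flagContent(contentId: string, category: HarmCategory, flaggedBy: string): ContentFlag {
  return { contentId, category, flaggedBy, flaggedAt: new Date(), status: "pending_review" };
}

// Review and evaluation step: content deemed harmful is removed or made inaccessible.
function resolveFlag(flag: ContentFlag, deemedHarmful: boolean): ContentFlag {
  return { ...flag, status: deemedHarmful ? "removed_or_inaccessible" : "dismissed" };
}
```

The design point worth noting is that user flags, rather than proactive scanning, are the entry point for review, which mirrors the Act’s general approach outside the child-victimization exception.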

So what is an adequate measure? The Act prescribes certain factors that help qualify adequacy, such as the size of the service, the effectiveness of the measures, and the technical and financial capacity of the operator, but overall the criteria remain vague. Pending regulations are expected to provide additional clarity around the application of these criteria, though expectations of what is adequate may well evolve as technology allows new solutions.

Many operators likely already have systems in place for flagging and removing harmful content. However, these systems may require updates to comply with the Act. For example, operators would have an obligation to submit a digital safety plan to the Commission in respect of each of the operator’s regulated services. That plan must include, among other things:

  • a comprehensive description of the operator’s general compliance measures, including any additional measures taken to protect children;
  • information respecting the volume and type of harmful content that was accessible on the service, including the volume and type of harmful content that was moderated;
  • the volume and type of harmful content that would have been accessible on the service had it not been moderated; and
  • the manner in which and the time within which harmful content was moderated.

While operators would be required to make their digital safety plan and general user guidelines publicly available, their inventory of electronic data may remain private.
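
Purely as an illustration, the reporting elements listed above might be aggregated into a record like the hypothetical TypeScript structure below. The field names are assumptions made for the sketch; the Commission’s actual filing format is not yet known and will be set by regulation or guidance.

```typescript
// Hypothetical shape of the reporting data behind a digital safety plan.
// Field names are assumptions; the Commission's actual filing format is not yet known.

interface ModerationStats {
  category: string;                     // e.g. "content that foments hatred"
  volumeAccessible: number;             // harmful content that was accessible on the service
  volumeModerated: number;              // harmful content that was moderated
  volumePreventedIfUnmoderated: number; // would have been accessible had it not been moderated
  medianHoursToModerate: number;        // the manner/time within which content was moderated
}

interface DigitalSafetyPlan {
  serviceName: string;
  reportingPeriod: { start: string; end: string }; // ISO 8601 dates
  complianceMeasures: string[];         // general measures, including child-protection measures
  moderationStats: ModerationStats[];
  publiclyAvailable: boolean;           // the plan and user guidelines must be made public
}
```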

Although an operator’s intuition may be to scrub its servers of all instances of harmful content discovered, the operator must also preserve certain types of content, along with all other related computer data in its possession (such as logs), for a period of one year. After this period, the operator would then need to destroy the content, subject to the Act.
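
As a rough sketch of this preserve-then-destroy cycle (assuming a simple removal timestamp as the trigger, which the Act and regulations may define differently):

```typescript
// Illustrative one-year preserve-then-destroy rule for removed harmful content
// and related computer data (e.g. logs). Exceptions under the Act are not modelled.

const ONE_YEAR_MS = 365 * 24 * 60 * 60 * 1000;

interface PreservedRecord {
  contentId: string;
  removedAt: Date;       // assumed trigger for the preservation period
  relatedData: string[]; // e.g. references to log files kept alongside the content
}

function isDueForDestruction(record: PreservedRecord, now: Date = new Date()): boolean {
  return now.getTime() - record.removedAt.getTime() >= ONE_YEAR_MS;
}
```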

The pending regulations will also prescribe design features, such as age-appropriate design, that operators will be required to integrate into their regulated services to protect children. At this time, there is minimal information available concerning these design features; however, the Canadian government is expected to follow the approach other jurisdictions have taken (see, for example, California’s Age-Appropriate Design Code Act).

Lastly, the Act would require operators to make a specific resource person available to hear user concerns and then direct them to internal and external resources. The Act also encourages operator employee “whistleblowing”.

What is not covered

Perhaps in an effort to stave off concerns over freedom of expression, the Act expressly excludes private and encrypted messaging services from its scope (though the Criminal Code changes may still apply, see below), noting that “The duties imposed under this Act on the operator of a regulated service do not apply in respect of any private messaging feature of the regulated service.” Interestingly, a private messaging feature is one that enables a person to communicate with a limited number of users determined by that person, as opposed to a potentially unlimited number of users not determined by that person, which has implications for closed-circle social media services and communication platforms.

When it comes to truly harmful content, though, drawing a neat line between posting on a social media site and messaging within a social media site may be difficult. In addition, except for the narrow but important instance of content that sexually victimizes a child or revictimizes a survivor, the Act does not require any proactive steps on the part of covered social media platforms to identify, manage, and reduce harmful content on their services. The bill is expected to be negotiated heavily in the political arena, likely leading to additional carveouts in whatever version is ultimately passed.

Penalties

The Act continues the unimaginative but perhaps inevitable perpetually-escalating arms race since the GDPR’s famous four percent of global turnover penalty. General contraventions of the Act could lead to a maximum penalty of not more than six percent of the gross global revenue of the operator or $10 million, whichever is greater. Penalties are significantly higher for specific categories of offences. For example, if an operator contravenes an order from the Commission that requires the operator to publish a notice concerning its violation under the Act, the operator is liable, on conviction on indictment, to a fine of not more than eight percent of the operator’s gross global revenue or $25 million, whichever is greater; or, on summary conviction, to a fine of not more than seven percent of the operator’s gross global revenue or $20 million, whichever is greater.
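
For illustration, the “whichever is greater” ceiling is simply the maximum of the percentage figure and the dollar floor. The revenue figures in the short sketch below are assumed examples, not figures from the Act:

```typescript
// Illustrative "greater of" penalty ceiling; revenue figures are assumed examples.

function penaltyCeiling(grossGlobalRevenue: number, rate: number, floor: number): number {
  return Math.max(grossGlobalRevenue * rate, floor);
}

// General contravention: greater of 6% of gross global revenue or $10 million.
penaltyCeiling(500_000_000, 0.06, 10_000_000); // => 30_000_000 (the percentage governs)
penaltyCeiling(100_000_000, 0.06, 10_000_000); // => 10_000_000 (the floor governs)
```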

Important (and already controversial) changes to other Acts

Bill C-63 also sets out certain proposed changes to other Canadian legislation that would amplify the underlying purpose of the Online Harms Act, perhaps even beyond the scope of online activities. Considered to be the hateful acts portions of the Bill, these changes have attracted decidedly more controversy.

Certain changes to the Criminal Code would include:

  • allowing any person (with the Attorney General’s consent) to seek a peace bond against someone if they have “fears on reasonable grounds” that the person will commit a hate offence, for a period of not more than 12 months (or, if the person had previously been convicted of a hate offence, two years), which can provide for (a) electronic monitoring devices, (b) house arrest, (c) abstinence from drugs or alcohol, (d) mandatory drug or alcohol tests, (e) a prohibition on communicating with any identified person or going to any place, and (f) a prohibition on possession of firearms;
  • creating a definition of hatred (if you are curious, it “means the emotion that involves detestation or vilification and that is stronger than disdain or dislike”) while clarifying that a statement “does not incite or promote hatred […] solely because it discredits, humiliates, hurts or offends”;
  • creating another hate crime offence that tags onto the commission of any other crime (but, again, excluding where the act solely discredits, humiliates, hurts or offends the victim); and
  • extending the maximum prison sentence for hate propaganda to five years, and expanding the maximum sentence for advocating or promoting genocide to imprisonment for life.

Proposed changes to the Canadian Human Rights Act would:

  • reinstate the previously repealed “communication of hate speech” ground of discrimination, whether communicated via the Internet or other means of telecommunication, which discrimination continues for as long as the hate speech “remains public and the person can remove or block access to it” (clearly, an indication of an awareness of social media activities);
  • clarify that someone does not discriminate by communication of hate speech by reason only that they indicate the existence or location of the hate speech (presumably, such as a link or a search result), host or cache the hate speech or information about it (such as a content delivery network or storage host), are a broadcaster (such as a television or radio undertaking), or operate a social media service (such as those regulated by the Online Harms Act); and
  • create penalties against such discrimination up to and including (a) an order to cease the discrimination or take measures to redress or prevent it, (b) an order to pay compensation of up to $20,000 to any victim “for any pain and suffering that the victim experienced” where the person created or developed, in whole or in part, the hate speech to which the complaint relates, and (c) an order to pay a fine of up to $50,000 in situations merited by “the nature, circumstances, extent and gravity of the discriminatory practice, the willfulness or intent of that person, any prior discriminatory practices […] and the person’s ability to pay the penalty.”

What can we expect going forward?

The internet has flourished under a relatively laissez-faire attitude towards user content, promoted to some extent by values of free speech and, to some extent, by unique laws in the United States that shield service providers from liability for the harms wrought by their users. However, the world has started to weigh the relative benefits and harms of online services, and social media is now squarely in regulators’ sights.

The Online Harms Act emphasizes the importance of balancing its protective purposes with maintaining freedom of expression and addressing privacy-related concerns. However, critics argue the proposed legislation fails to adequately address the practicalities of implementing this balance and is a massive overreach by “Big Brother”. While there are certain blind spots, there are still several review stages before Bill C-63 receives Royal Assent and becomes law.

Pending regulations to the Online Harms Act are expected to provide some much-needed clarity. Regardless, it would not be surprising if this Act becomes a hot-button issue for political debate.

When should I seek legal advice?

It is important for any business operating social media services to consider the potential implications for its operations if the Online Harms Act becomes law and its services become regulated services. The costs of non-compliance could be significant. A lot still needs to develop, both in terms of finalizing the legislation and promulgating its regulations, but it is clear that digital safety (in whatever shape or form) is at the forefront of the government’s agenda. Operators should be prepared for possible regulatory changes that could impact their business.
