
Consultations on Addressing Harmful Content Online

July 29, 2021

The Government of Canada today announced consultations on its proposed approach to make social media platforms and other online communication services more accountable and more transparent when it comes to combating harmful content online, specifically:

  • terrorist content;
  • content that incites violence;
  • hate speech;
  • non-consensual sharing of intimate images; and
  • child sexual exploitation content.

The Government proposes a new legislative and regulatory framework that would create rules for how social media platforms and other online services must address harmful content. The framework sets out:

  • which entities would be subject to the new rules;
  • what types of harmful content would be regulated;
  • new rules and obligations for regulated entities; and
  • two new regulatory bodies and an Advisory Board to administer and oversee the new framework and enforce its rules and obligations.

The proposal is outlined in two documents:

  1. a discussion guide that summarizes and outlines the Government’s overall approach; and
  2. a technical paper that summarizes the proposed instructions to inform the upcoming legislation.

Both documents are presented with thematic modules. Each module outlines a component of the Government’s proposed approach.

Comments can be submitted by email to pch.icn-dci.pch@canada.ca.

Highlights

Who and what would be regulated

New legislation would apply to ‘online communication service providers’.

The concept of online communication service provider is intended to capture major platforms (e.g., Facebook, Instagram, Twitter, YouTube, TikTok, Pornhub) and to exclude products and services that would not qualify as online communication services, such as fitness applications or travel review websites.

The legislation would not cover private communications, telecommunications service providers, or certain technical operators; specific exemptions would be set out for these services.

The legislation would also authorize the Government to include or exclude categories of online communication service providers from the application of the legislation within certain parameters.

New rules and obligations

The new legislation would set out a statutory requirement for regulated entities to take all reasonable measures to make harmful content inaccessible in Canada. This obligation would require regulated entities to do whatever is reasonable and within their power to monitor for the regulated categories of harmful content on their services, including through the use of automated systems based on algorithms.

Once platform users flag content, regulated entities would be required to respond by assessing whether the flagged content should be made inaccessible in Canada, according to the definitions outlined in legislation. If the content meets the legislated definitions, the regulated entity would be required to make it inaccessible on its service in Canada within 24 hours of being flagged.

Regulated entities would also be required to establish robust flagging, notice, and appeal systems for both authors of content and those who flag content. Once a regulated entity makes a determination on whether to make content inaccessible in Canada, they would be required to notify both the author of that content and the flagger of their decision, and give each party an opportunity to appeal that decision to the regulated entity.
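To make the mechanics of this obligation concrete, the following is a minimal, purely illustrative sketch (in Python) of the flag, assess, takedown, notify, and appeal flow described above. The proposal does not prescribe any implementation; every name in the sketch (FlagReport, assess_content, make_inaccessible_in_canada, notify, handle_flag) is hypothetical.

    # Purely illustrative sketch of the flag -> assess -> takedown -> notify -> appeal
    # workflow described in the proposal. All names here are hypothetical; the
    # proposal does not prescribe any particular implementation.

    from dataclasses import dataclass
    from datetime import datetime, timedelta, timezone

    # Content meeting the legislated definitions must be made inaccessible in
    # Canada within 24 hours of being flagged.
    TAKEDOWN_WINDOW = timedelta(hours=24)


    @dataclass
    class FlagReport:
        content_id: str
        author_id: str
        flagger_id: str
        flagged_at: datetime


    def assess_content(content_id: str) -> bool:
        # Placeholder for an assessment against the five legislated categories
        # (terrorist content, incitement to violence, hate speech,
        # non-consensual sharing of intimate images, child sexual exploitation).
        return False


    def make_inaccessible_in_canada(content_id: str, deadline: datetime) -> None:
        # Placeholder: restrict or remove the content before the 24-hour deadline.
        print(f"{content_id} made inaccessible in Canada (deadline {deadline.isoformat()})")


    def notify(user_id: str, made_inaccessible: bool) -> None:
        # Both the author and the flagger are told the outcome and offered an
        # opportunity to appeal the decision to the regulated entity.
        print(f"notified {user_id}: inaccessible={made_inaccessible}, appeal available")


    def handle_flag(report: FlagReport) -> None:
        deadline = report.flagged_at + TAKEDOWN_WINDOW
        made_inaccessible = assess_content(report.content_id)
        if made_inaccessible:
            make_inaccessible_in_canada(report.content_id, deadline)
        notify(report.author_id, made_inaccessible)
        notify(report.flagger_id, made_inaccessible)


    handle_flag(FlagReport("post-123", "author-1", "flagger-7", datetime.now(timezone.utc)))

In this sketch the 24-hour window is measured from the moment of flagging, and both the author and the flagger are informed of the outcome and given a route to appeal, first to the regulated entity and ultimately to the Recourse Council described below.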

Engaging law enforcement and CSIS

Building a regulatory framework with content removal requirements means considering how the new rules interact with the role of law enforcement and CSIS in identifying public safety threats and preventing violence. Removal alone may push threat actors beyond the visibility of law enforcement and CSIS, toward encrypted websites and platforms with more extremist and unmoderated harmful content.

Establishment of new regulators

The proposed legislation would create a new Digital Safety Commission of Canada to support three bodies that would operationalize, oversee, and enforce the new regime: the Digital Safety Commissioner of Canada, the Digital Recourse Council of Canada, and an Advisory Board.

The Digital Safety Commissioner of Canada (the Commissioner) would administer, oversee, and enforce the new legislated requirements noted above. It would also be mandated to lead and participate in research and programming, convene and collaborate with relevant stakeholders, and support regulated entities in reducing the five forms of harmful content falling under the new legislation on their services.

The Digital Recourse Council of Canada (Recourse Council) would provide people in Canada with independent recourse for the content moderation decisions of regulated entities like social media platforms. Once users have exhausted all avenues for appeals within regulated entities themselves, they would be able to take their case to the Recourse Council for decision. The Recourse Council would provide independent and binding decisions on whether or not content qualifies as harmful content as defined in legislation and should be made inaccessible.

Modifying Canada’s existing legal framework

In addition to the legislative amendments proposed under Bill C-36, further modifications to Canada’s existing legal framework to address harmful content online could include:

  • modernizing An Act respecting the mandatory reporting of Internet child pornography by persons who provide an Internet service (referred to as the Mandatory Reporting Act) to improve its effectiveness; and
  • amending the Canadian Security Intelligence Service Act to streamline the process for obtaining judicial authority to acquire basic subscriber information of online threat actors.
