
No New Laws Required To Hold Social Media Accountable For Illegal Content
September 21, 2020
In a research report released today, Friends of Canadian Broadcasting argues that “in the eyes of Canadian law, social media companies like Facebook and YouTube are arguably publishers, opening the platforms to legal liability for user-generated content.”
News Release: No new laws required to hold social media accountable for illegal content
TORONTO, SEPTEMBER 21, 2020 – In the eyes of Canadian law, social media companies like Facebook and YouTube are arguably publishers, opening the platforms to legal liability for user-generated content, according to Platform for Harm, a new research report released this morning by the watchdog group FRIENDS of Canadian Broadcasting.
The report builds on a legal analysis provided by libel defence lawyer and free speech advocate Mark Donald. Longstanding common law holds that those who publish illegal content are liable for it, in addition to those who create it. According to Donald, this liability is triggered when publishers know that content is harmful but publish it anyway, or when they fail to remove it after being notified of it.
“Our elected officials don’t need to create new laws to deal with this problem. They don’t need to define harmful content, police social media, or constrain free expression in any new way. All government needs to do is apply existing laws. If a judge decides that content circulated on social media breaks the law, the platform that publishes and recommends that illegal content must be held liable for it,” says FRIENDS’ Executive Director Daniel Bernhard.
Social media platforms have long argued that they are simple bulletin boards that display user-generated content without editorial control, and that it is not possible to discover illegal content from among the 100 billion daily posts.
Yet Facebook and other social media platforms claim to advertisers that they do indeed have the technology to recognize content users post before it is published and pushed out to others.
In fact, the report finds that platforms like Facebook routinely exercise editorial control by promoting content users have never asked to see, including extreme content that would land any other publisher in court: for example, the promotion of footage of illegal acts such as the Christchurch, NZ massacre. They also conceal content from users without consulting them, another form of editorial control.
“Facebook and other social media platforms have complaints processes where they are alerted to potentially illegal or otherwise objectionable content. Yet it is their own community standards, not the law, that dictate whether they will remove a post. Even then, Facebook employees say that the company does not apply its own standards when prominent right-wing groups are involved,” says Dr. George Carothers, FRIENDS’ Director of Research.
Executive Summary
This report explores the conditions under which Canadian law would hold internet intermediaries, such as social media platforms, liable for disseminating harmful content. Online harms are diverse and widely evident on social media platforms; they include hate speech, terrorism and radicalization, bullying, disinformation, and the encouragement of self-harm or suicide, among others.
Based on present principles of legal liability in Canadian common law, an internet intermediary that (1) actively promotes user content by way of algorithmic manipulation and/or (2) receives notice from a prospective plaintiff of unlawful content is very arguably a “publisher” at law, and therefore liable for this content. This analysis is based on the law of “publication,” but is not limited to traditional forms of publishing.
Large social media platforms often present themselves as passive parties to this content. However, certain platforms have demonstrated advanced capacities to understand users and user-generated content before it is posted. These capacities are used to target users with revenue-generating content and advertisements, and to remove content that platforms themselves deem to be “core” harms. Platforms have strong financial incentives to present users with content that retains their attention, and research in this field demonstrates that hateful content attracts more attention than moderate content. Given their sophisticated pre-publication knowledge of content, including harmful content, it is reasonable to ask whether the prevalence of hateful content on such platforms is intentional.
Given these facts, Canadian law very arguably provides complainants with sufficient grounds to hold intermediaries liable for harms that take place on their platforms. However, the burden falls upon individual complainants to pursue these cases in court on their own. The significant imbalance of power and resources between platforms and individuals greatly reduces the prospect of a complainant having their case heard, which represents a significant barrier to justice.
Online harms are clear and present, as are the financial incentives that deter internet intermediaries from taking meaningful action to end them. If Canadian leaders and policymakers wish to address the issue of online harm, they must intervene.
Summary of Recommendations to Canadian Policymakers
1. Protect freedom of expression and acknowledge that it must be balanced against other rights. The right to free expression must be upheld for all Canadians, as must protection against defamation, hate speech, and other harmful and illegal communications. Any regulation of internet intermediaries should retain and preserve this balance.
2. Acknowledge that certain online harms are categorically unacceptable. Incitement to suicide, stalking, and threats are emblematic examples. Responding robustly to such online harms is neither an alien nor a draconian principle in Canadian law.
3. Appreciate the platforms’ technical prowess. Internet intermediaries possess advanced tools for targeting content and monitoring users. Regulators should acknowledge and engage these technological capacities.
4. Take a sovereign approach. Policymakers have developed Canadian rules for multinational firms in regulated sectors like telecommunications, finance, aerospace, and defence. Firms operating in the online sphere should be treated no differently.
5. Use enshrined legal principles as a starting point for appropriate regulation. Internet intermediaries appear to meet clear standards for liability that are present in Canadian common law.
6. Ensure the onus does not fall on individuals. Individuals who have been harmed should not be burdened with initiating and financing a lawsuit against an internet intermediary. Consider the potential of a government agency, similar to the Privacy Commissioner’s office, that could take investigative and enforcement action on behalf of citizens.
7. Apply meaningful and proportionate sanctions. Policies and penalties should proportionally fit the size of the company and the magnitude of the harm done.
8. Create and enforce strict privacy protections that minimize intermediaries’ ability to collect personal data. Platforms benefit from the circulation of extreme content because it increases user engagement. This increased engagement generates vast amounts of user data that is captured in secret and used to direct content and advertisements at users. Reining in this invasive business model can reduce the circulation of harmful content.
9. Move promptly. Canada is uniquely placed to be a world leader in addressing online harms. By moving quickly, policymakers can shape standards that reflect Canada’s legal regime, values, and priorities.