
The Online Harms Act doesn’t go far enough to protect democracy in Canada

Credit: Pixabay/CC0 Public Domain

The Liberal government’s recent proposal for regulating social media platforms, the Online Harms Act (Bill C-63), comes as the final act in a promised trilogy of bills aimed at bringing some order to the digital world.

After contentious attempts to address the fallout from the Online News Act and the threat that online streaming platforms pose to Canadian content, this final bill seeks to identify and regulate harmful content. The Online Harms Act follows the European Union, the United Kingdom and Australia in setting up a new regulator to address the spread of what is considered harmful content.

The idea that such efforts are necessary is not controversial—content that sexually exploits children, for instance, has already been a target for law enforcement, and hate speech has been illegal for decades in most industrialized democracies.

Platform responsibility

Online harms laws are based on the idea of “intermediary liability”: making platforms legally responsible when their users distribute content that breaks the law.

Under the Online Harms Act, platforms will be required to promptly remove two forms of content—that which “sexually victimizes a child or revictimizes a survivor” and “intimate images posted without consent”—or face large fines.

But it also includes less strict measures to deal with other forms of harmful content, including the promotion of terrorism or genocide, incitement to violence and hate speech. Platforms will be required to develop, and make public, plans to “mitigate the risk that users will be exposed to harmful content on the services” and to submit digital safety plans to the Digital Safety Commission of Canada.

Crime and punishment

There are also new criminal offenses and penalties for users who upload these forms of content. These provisions have been the subject of much of the debate over the bill.

Many civil libertarians argue that they go too far, while advocates for marginalized groups believe that they are long overdue.

But much of the debate over these specific details misses a deeper failing of the bill, which derives from the way the idea of “online harm” is understood.

CBC News looks at the Online Harms Act.

‘Lawful but awful’

For much of the last decade, digital media scholars have also been directing attention to different ways in which platform communication ought to be considered harmful. The definition of harmful content in Bill C-63 focuses on harms that are experienced by users when they encounter particular forms of content posted by others.

But platforms aren’t merely empty spaces for users to send messages to other users—they play an active role in shaping the communication that takes place, determining how messages are combined and sorted, and how their distribution is prioritized and limited.

For this reason, algorithms that amplify or suppress particular kinds of messages should also be seen as a source of harm.

This is often understood as the reason why fake news or hyper-partisan political commentary is so problematic on platforms. Even perfectly legal communication—what is called “lawful but awful” content—can contribute to a pattern of serious harm.

One person denying the scientific consensus on vaccines, promoting entirely baseless conspiracy theories about political figures or discouraging people from voting might not be doing anything “harmful” in the sense that Bill C-63 defines the concept.

But when social media algorithms ensure that many users don’t see counter-evidence from outside their “filter bubble,” the dangers are real. This is also true of any number of other kinds of platformed deception, such as AI-generated deepfake videos of political candidates.

Democracy at risk

Democracy relies on open and rational deliberation. The conditions for that kind of communication can be degraded by the way that algorithms operate. That algorithms are operated by private, for-profit corporations that seek to maximize “engagement” makes the problem even worse; this creates an incentive for content that provokes outrage and further polarizes political opinion.

Exactly how algorithms should be regulated is not a simple question. Some of the provisions in Bill C-63 might be a step in the right direction: requirements for risk mitigation plans, an ombudsperson who can help the public submit complaints about platforms to a regulator and obligations to provide information about content. And importantly, all of this can be done without unnecessarily violating users’ freedom of expression.

But a more specific legal obligation on platforms to deprioritize content that is clearly false, such as disinformation about public health or elections, would be necessary to stop such content from deepening online polarization and fueling anti-democratic populism.

While the Online Harms Act might protect individuals from being exposed to specific kinds of content, protecting the democratic nature of our society will require a more robust set of regulations than what has been proposed.

Provided by
The Conversation


This article is republished from The Conversation under a Creative Commons license. Read the original article.

Citation:
Opinion: The Online Harms Act doesn’t go far enough to protect democracy in Canada (2024, March 20)
retrieved 20 March 2024
from https://phys.org/news/2024-03-opinion-online-doesnt-democracy-canada.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no
part may be reproduced without the written permission. The content is provided for information purposes only.



