Algorithms Patrolling Content: Where’s the Harm?

Authors :
Monica Horten
Source :
SSRN Electronic Journal.
Publication Year :
2021
Publisher :
Elsevier BV, 2021.

Abstract

This paper reveals ways in which algorithms on the Facebook platform have the effect of suppressing content distribution without specifically targeting it for removal, and examines the consequential stifling of users’ speech. At its heart is an examination of the colloquial concept of a ‘shadow ban’, a term that refers to a specific scenario in which users’ content is hidden or deprioritised without their knowledge. The paper reveals how the Facebook shadow ban works by blocking dissemination in News Feed, Facebook’s recommender system that curates content for users and also the name of the algorithm that encodes the process. The decision-making criteria are based on ‘behaviour’, a term that refers to activity on the Page that is identifiable through patterns in the data. This technique is rooted in computer security, and it raises questions about the balance between security and freedom of expression. The paper is situated in the field of research that addresses the responsibility and accountability of large online platforms with regard to content moderation, and it examines the impact of the Facebook shadow ban through the lens of the user. Users, whether acting as speakers or as recipients of information, have positive rights that must be protected, and they should not be treated as passive victims. The user experience was studied over one year, from November 2019 to November 2020, across 20 Facebook Pages from the UK. Data provided to the Pages via Facebook Insights was analysed to produce a comparative metric, and the paper considers how the shadow ban could be assessed under human rights standards. It concludes with a recommendation for quality controls on Facebook’s internal processes, potentially including a form of triage to identify genuine, lawful content that has been caught up in the security net. Overall, an improved understanding should be developed of the automated processes and algorithms used in content moderation. This is a vital step towards safeguarding the online platforms as a forum for public discourse.
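
The abstract does not state the formula behind the comparative metric. As a purely hypothetical illustration of the kind of analysis it describes, the Python sketch below computes a reach-per-follower rate from Insights-style data and compares each Page against the cohort median; the data, the field names (`followers`, `post_reach`), and the metric itself are assumptions for illustration, not the paper’s actual method.

```python
from statistics import mean, median

# Toy Insights-style export: per-Page follower counts and per-post reach.
# All numbers are invented for illustration.
pages = {
    "Page A": {"followers": 12000, "post_reach": [450, 380, 510, 290]},
    "Page B": {"followers": 11500, "post_reach": [3100, 2800, 3400, 2950]},
    "Page C": {"followers": 9800,  "post_reach": [2500, 2650, 2400, 2700]},
}

def reach_rate(page: dict) -> float:
    """Mean per-post reach as a fraction of the Page's follower count."""
    return mean(page["post_reach"]) / page["followers"]

# Comparative metric: each Page's reach rate against the cohort median. A
# sustained shortfall relative to comparable Pages may signal suppressed
# distribution rather than ordinary variation in audience interest.
rates = {name: reach_rate(page) for name, page in pages.items()}
baseline = median(rates.values())

for name, rate in sorted(rates.items()):
    print(f"{name}: reach rate {rate:.4f} ({rate / baseline:.2f}x cohort median)")
```

In this invented data set, Page A reaches roughly 3% of its followers per post while its peers reach over 25%, so it scores far below the cohort median; a sustained gap of that kind is the sort of signal a comparative metric could surface.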

Details

ISSN :
1556-5068
Database :
OpenAIRE
Journal :
SSRN Electronic Journal
Accession number :
edsair.doi...........545a4b0a8e5202084198527699415a73