Policy Series: Electoral Disinformation

September 29, 2021

In the third edition of the ActiveFence Policy Series, we examine the core components of major tech companies' health and electoral disinformation policies. This post focuses on how platforms combat the spread of electoral disinformation.


The risk posed by misleading or false information has been significantly amplified in recent years. The world is witnessing more and more large-scale, coordinated information campaigns, powered by the proliferation of unregulated 'news' outlets. Recent examples include the 2016 UK Brexit referendum and US presidential election, the 2017 French presidential election, the 2018 Taiwanese local elections, and the 2020 US presidential election. The discourse surrounding the 2020 US presidential election and its violent aftermath made clear that political disinformation has reached new heights. Because much of today's disinformation spreads online through tech platforms, many of these companies have drawn up guidelines and policies in response to the abuse of their services. While some platforms remove material they determine to be false, others flag potentially problematic material to warn their users.

In this third edition of the Policy Series, we provide a thorough evaluation of twenty tech platforms that have implemented policies related to electoral disinformation and civic processes. As in our first and second articles on tech platforms' policies, we offer an overview of how different online platforms navigate election disinformation.

Policy Challenges

Online platforms of all sizes have created comprehensive community guidelines and policies. Whether referred to as community guidelines, content policies, or trust and safety policies, all set the ground rules for platform use, outlining what can and cannot be done and establishing transparent processes that keep users and platforms safe.

Creating these policies is a complex task, requiring a thorough evaluation of brand values and intended platform use, an understanding of on-platform activity, monitoring of international legislation, and ongoing analysis of best practices among similar technology companies. Additionally, each platform category faces its own complications due to the varying types of user-generated content it hosts.


Social Media Platforms

While false and misleading information is an ever-present problem for platforms, it becomes particularly pervasive during election cycles. In 2021, Frontiers in Political Science published "Social Media, Cognitive Reflection, and Conspiracy Beliefs," which found that the use of social media as a news source correlates directly with the likelihood of endorsing conspiracy theories.

As social media platforms are frequently used to share information, they continuously develop guidelines to help their moderators protect users from deceptive content. While some platforms have developed specific policies, others work with fact-checkers and external organizations to verify claims made during elections.

Instant Messaging 

Unlike social media, where user-generated content is mainly intended for large audiences or public consumption, instant messaging platforms are generally used for smaller group communications between individuals who are already in contact. Because of the closed nature of these conversations, community guidelines are less specific than those of social media companies and focus on ensuring that users do not impersonate others or misrepresent the source of a message.

Video Sharing

Video sharing platforms are a popular source of news and information, with reputable news networks uploading clips and reports from their daily news shows every hour. Alongside these legacy media accounts, a multitude of online commentators, comedians, independent journalists, and influencers are also active in the conversation about current affairs.

Given the number of accounts sharing media related to current affairs and elections, video sharing platforms have put guidelines in place to regulate content and prevent disinformation from spreading unchallenged. While some platforms are more specific in their disinformation policies, others take a broader approach, using more catch-all terminology.

File Sharing

File sharing platforms are often used as a central component in the infrastructure necessary to share disinformation content at scale and across a range of online platforms. Aware that their services could be abused to store materials that could be weaponized to attack the legitimacy of civic processes and elections on other platforms, many file sharing platforms have enacted a number of content prohibitions.

The Ongoing Challenge

These complex and sensitive issues continue to evolve alongside the online world, user behavior, and the political climate. To help navigate these shifts, ActiveFence's research team continuously monitors developments across the trust and safety ecosystem.

Our third report in ActiveFence’s Policy Series details twenty of the biggest platforms’ election disinformation policies to equip Trust and Safety teams with the information needed to tackle electoral disinformation.

For the comprehensive report detailing guidelines and examples of election disinformation policy, download our report.
