Why Content Moderation Isn’t Censorship

November 23, 2022
Are censorship and content moderation synonymous?

Content moderation is a frequently misunderstood concept, yet it is essential to keeping platforms safe and secure. Media discourse tends to liken it to censorship, which devalues its purpose. The two terms are often conflated, but content moderation and censorship aren’t the same thing, and the notion that they are isn’t just false; it can be dangerous.

So What’s What?

Censorship is the suppression of ideas, information, images, and speech, codified in government-level policy that applies to all individuals in a given space or territory. It’s used by democratic and totalitarian regimes alike, to varying ends. Typically, we think of censorship as simply limiting speech, but it’s also used as a tool of national security, or as a means of upholding public morality, such as rules regarding nudity in film and television. Censorship is absolutist and top-down, and violating it can result in a range of penalties, from fines to legal action, including jail time.

Unlike censorship, content moderation refers to policies put in place by private platforms – not the government – and applies only to the users of those platforms. In practical terms, moderation means identifying violative content and applying the relevant policy to it, whether by adding a warning label, blurring an image, suspending an account, removing a post, or banning a user. Its aim is to protect users and maintain community standards. While censorship is unilateral, each platform decides its own moderation policy, so what’s allowed and what’s prohibited varies across sites and apps. And unlike censorship, which takes a black-or-white approach, moderation often requires a great deal of nuance: some decisions are clear-cut, while others necessitate laborious discussion about a post’s potential influence. Censorship is yes or no; moderation is a big, fat…maybe.
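
To make that range of outcomes concrete, here is a minimal, purely illustrative sketch in Python. The violation categories, severity thresholds, and the ModerationAction and Assessment names are hypothetical (not any platform’s actual policy); the point is only that a single moderation decision can land anywhere between "leave it up" and "ban the user":

```python
from enum import Enum, auto
from dataclasses import dataclass

class ModerationAction(Enum):
    """Illustrative range of moderation responses -- far more than a binary allow/block."""
    NO_ACTION = auto()
    WARNING_LABEL = auto()
    BLUR_IMAGE = auto()
    REMOVE_POST = auto()
    SUSPEND_ACCOUNT = auto()
    BAN_USER = auto()

@dataclass
class Assessment:
    """A hypothetical reviewer or classifier output for one piece of content."""
    category: str          # e.g. "nudity", "self_harm", "csam" (made-up labels)
    severity: float        # 0.0 (benign) to 1.0 (severe); thresholds below are invented
    repeat_offender: bool  # whether the account has prior violations

def decide(assessment: Assessment) -> ModerationAction:
    # Hard red line: some categories are acted on regardless of nuance.
    if assessment.category == "csam":
        return ModerationAction.BAN_USER
    # Everything else is graded, not yes/no -- the "big, fat...maybe" of moderation.
    if assessment.severity < 0.3:
        return ModerationAction.NO_ACTION
    if assessment.severity < 0.6:
        # Borderline imagery might be blurred; borderline text might just get a label.
        if assessment.category == "nudity":
            return ModerationAction.BLUR_IMAGE
        return ModerationAction.WARNING_LABEL
    if assessment.repeat_offender:
        return ModerationAction.SUSPEND_ACCOUNT
    return ModerationAction.REMOVE_POST

# Example: a borderline post gets a label, not a takedown.
print(decide(Assessment(category="self_harm", severity=0.45, repeat_offender=False)))
```

Outside a few hard red lines, the same category of content can lead to very different outcomes depending on severity and context – exactly the nuance that a yes-or-no censorship regime doesn’t attempt.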

One of the most important distinctions between the two is that content moderation applies only to individuals who choose to participate in spaces where those policies exist. Censorship, when applied to speech, denies a fundamental right, whereas moderation governs a privilege. Users who disagree with a platform’s moderation policy or tactics can simply opt to use another platform; those who live in a society where speech is censored don’t have that option.

Black and White and Gray All Over

Both censorship and content moderation deal with what’s allowed and what’s not, and while in theory both draw clear boundaries, in practice moderation involves a considerable gray area.

There are clear red lines, of course: anything to do with child sexual abuse material (CSAM) is broadly forbidden across all mainstream platforms. But even with CSAM, there’s room for debate about what exactly constitutes that material. Aside from the obvious, Trust & Safety teams need to understand what types of content might be used as CSAM, and develop a policy – and enforcement mechanisms – around those items as well. In Japan, for example, some material depicting underage models in a sexual context is legally allowed, though this type of imagery is illegal elsewhere. Such material is often sold in online marketplaces, and the platforms hosting those marketplaces must be mindful of what’s available where, so as not to break any specific country’s laws.

To take another example, the seemingly harmless photos parents might post of their children in bathing suits aren’t sexually explicit and don’t depict nudity, but they’re liable to be misused by threat actors. The same goes for a video of the birth of a baby, filmed perhaps for an anatomy lesson or health class, or a Renaissance painting depicting a child in the nude. A platform policy that prohibits nudity – of children or otherwise – isn’t nuanced enough to grasp the difference between these examples and something more explicit. Moderators therefore need to understand these nuances; enforcing an all-or-nothing policy, as censorship does, could mean removing content that isn’t actually violative.

The gray area in content moderation extends beyond “simple” violations, like those involving nudity and profanity. Moderators need to take into account not only the content of a specific post, but also its effect on the users who may interact with it. Social media ‘challenges’ that have become popular across platforms pose exactly this type of problem. While often harmless, some, like the ‘Tide Pod Challenge,’ are dangerous, and potentially fatal. In instances like these, moderators need to decide how to balance the freedom to do something asinine against the potential harm that comes with the spread of such behavior. Where does free speech end and the protection of public safety begin? Replace the Tide Pods with razor blades, and think again: where does the line get drawn? Is the principle of free speech enough justification for allowing content that encourages self-harm to remain online?

These are questions that censorship doesn’t need to deal with, but content moderation absolutely does. It’s in situations like these that the value of content moderation becomes clearer. In the case of the Tide Pod Challenge, some platforms removed videos and posts encouraging users to eat the candy-like detergent packets, in an effort to curb their spread and protect public safety. Moderation’s aim of making platforms safer for users can, in cases like these, only be a positive.

Popular Discourse On Moderation And Censorship

In the last few years, there’s been a tendency to co-opt the term ‘censorship’ and apply it indiscriminately. And while moderation isn’t censorship, it gets called that anyway, in an effort to delegitimize it.

One of the most common arguments against moderation is that its aim is to remove posts containing political opinions contrary to those held by a platform. While censorship bans the publication of certain opinions outright, content moderation leaves the floor open for differing voices. The idea that a platform prohibits posts leaning politically one way or the other is, in truth, nonsense. Facebook, for example, doesn’t moderate organic content posted by politicians; the platform says that limiting free speech from political leaders, candidates, or appointees would “leave people less informed.” Similarly, posts that could be perceived as misinformation are allowed to remain on the platform, so long as they don’t deal with a few specific topics, like voter and census information, or issues of public health and safety. Meta also took steps in 2021 to adjust its algorithm so users’ feeds would show less political content – an approach to moderation that goes beyond simply leaving things up or taking them down.

In reality, no platform explicitly forbids any political opinion; what platforms may ban, however, is the hate speech and extremist rhetoric that sometimes comes hand-in-hand with politically charged posts. Claiming the 2020 US Presidential Election was stolen might be fair game, but encouraging a violent insurrection in response to it is not. Posts in support of civil rights groups, like Antifa, are allowed – until those groups advocate violence, at which point that content violates a platform’s policy. Consider that, according to research by the Cato Institute, left-leaning users are three times more likely to report an offending post or user than their conservative counterparts. Because this group makes more use of the ability to flag posts, it’s unsurprising that Trust & Safety teams end up seeing – and potentially removing – more conservative content than liberal. That same study found that 35% of conservatives had had a post removed on a platform, compared to 20% of liberals; given the reasoning behind those numbers, it’s clear that accounts, posts, and groups on both sides of the aisle are subject to review.

Moderators deal with much more than monitoring potentially violative political speech, though this aspect of Trust & Safety work tends to get the most press. When moderation efforts are framed as censorship, the public loses sight of the positive work these teams do. A platform without any moderation wouldn’t be a safe place for users at all: it would be, simply put, chock full of threat actors. If you value opening your favorite apps without being inundated with spammy prostitution advertisements, incitements to terrorism, child sexual abuse material, financial scams, and human exploitation schemes, you have content moderation to thank.

The Good, The Bad, The Everything In Between

Censorship and content moderation are both incredibly complex issues, but conflating the two only further inflames tensions around a practice meant to create, maintain, and enhance the safety of a given space. The ways they’re conceptualized and carried out are different; their aims are different, as are their effects. Delegitimizing the work of content moderators only serves to escalate existing misunderstandings of what these important teams do. The co-opting of the word ‘censorship’ by those who have repeatedly been punished for violating a platform’s rules is about as legitimate as the boy who cried wolf. Users don’t get to make the rules of the platforms they join; in joining, they agree to abide by the ones that already exist. Those who don’t want to play by someone else’s rules don’t, in fact, have to play. Unlike in societies with censorship, they have the option to take their speech and rhetoric elsewhere. But by claiming censorship when they’re told they’re abusing a platform or violating its rules, they’re needlessly – and incorrectly – vilifying the work of Trust & Safety teams.

Curious about effective content moderation? Explore how our platform helps maintain balance without censorship.
