In the third edition of the ActiveFence Policy Series, we examine the health and electoral disinformation policies of major tech companies and their core components. This blog focuses on health disinformation policy.
In 2021, health care authorities are no longer the “go-to” for medical information; the internet is now the first place we turn. This, of course, creates a climate where false health information is more widespread than ever before. Though the problem of health misinformation is not new, COVID-19 has brought an onslaught of harmful information online, posing unprecedented risk to public health. As the world has witnessed during this pandemic, medical misinformation and conspiracy theories demonstrably damage personal and public health. For instance, a 2020 study published by Cambridge University Press found that people who believed in one or more conspiracy theories regarding COVID-19 were less likely to follow advice to protect their health, such as hand washing or social distancing.
During the COVID-19 pandemic, there have been significant and concerted disinformation campaigns against the vaccines produced to mitigate the harm of the virus. Conspiracy theories have also been promoted and amplified to challenge the public health guidelines implemented by health authorities to reduce transmission. ActiveFence investigated a number of these campaigns and discussed them in our report here.
The consequences of these campaigns have been dire. Significant portions of society are refusing to vaccinate, and alternative medicines, which can be at best useless and at worst harmful, are regularly taken in place of regulated medicines. The figures are stark: in the US, 98.3% of COVID-19 hospital admissions in July 2021 were reportedly among unvaccinated people.
As conspiracy theories about COVID-19 and its vaccines spread and change, the need for policy is clear. Platforms must either create general rules against health disinformation or frequently update their policy guides as online trends evolve, helping their moderators identify and counteract this dangerous disinformation.
Policy Challenges
The global pandemic has put social sharing platforms under more pressure than ever to keep their services clear of disinformation. From calls to alter their algorithms to President Biden stating that these platforms are killing people, social sharing platforms are constantly in the spotlight.
With the pressure on, social sharing platforms must respond to the growing threat of COVID-19 disinformation. Platforms want to protect their users from misinformation, but the task is not simple: the pandemic continues apace, deaths continue to climb, and new disinformation campaigns keep emerging and evolving. This places platforms in the position of needing to facilitate dissent and debate, essential facets of democracy, while protecting users from harmful content.
Social Media Platforms and Conspiracies
The previously mentioned Cambridge University Press study also found a correlation between belief in conspiracy theories and the use of social media as a source of information about COVID-19. People who believed one or more conspiracy theories were more likely to use social media as an information source than traditional media such as newspapers or radio. Likewise, people who used one or more social media platforms to gather information about COVID-19 were more likely to believe in a conspiracy theory.
As a result, the leading platforms have taken significant action to prevent harmful and false information from spreading via their services. Some platforms form policy around specific theories, such as banning content that claims the COVID-19 vaccine implants a 5G microchip. Other platforms create more general policies, such as banning misinformation about the efficacy and safety of COVID-19 preventative measures and treatments.
Video Sharing
Medical misinformation has long been a problem on video sharing platforms, and it continues to pose challenges for users and platforms alike.
The COVID-19 pandemic has exacerbated this issue, pushing technology platforms to pursue new and innovative ways of combating false and misleading medical information. Video sharing platforms have developed two models for tackling this form of disinformation. The first is to generally ban claims “that may cause harm to public health,” which gives moderators significant scope for action. The second is to clearly identify and prohibit specific instances of health disinformation.
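To make the difference between these two models concrete, here is a minimal, hypothetical sketch of how a moderation rule set might encode them. The `PolicyRule` structure, rule IDs, and claim strings are illustrative assumptions, not any platform's actual policy schema.

```python
from dataclasses import dataclass, field

@dataclass
class PolicyRule:
    """One rule in a hypothetical health-disinformation policy (illustrative only)."""
    rule_id: str
    description: str
    # Specific rules enumerate known false claims; a general rule leaves the list empty.
    specific_claims: list[str] = field(default_factory=list)

    def is_general(self) -> bool:
        # A rule with no enumerated claims relies on moderator judgment.
        return not self.specific_claims

# Model 1: a broad ban that gives moderators significant scope for action.
general_rule = PolicyRule(
    rule_id="HEALTH-GENERAL",
    description="Claims that may cause harm to public health",
)

# Model 2: a narrowly scoped rule prohibiting a specific, known false claim.
specific_rule = PolicyRule(
    rule_id="COVID-5G-CHIP",
    description="Vaccine microchip conspiracy content",
    specific_claims=["COVID-19 vaccines implant a 5G microchip"],
)

for rule in (general_rule, specific_rule):
    mode = "moderator judgment" if rule.is_general() else "exact-claim matching"
    print(f"{rule.rule_id}: enforced via {mode}")
```

The trade-off mirrors the policy one: general rules adapt to novel claims but demand more moderator judgment, while specific rules are easier to enforce consistently but must be updated as new conspiracy theories emerge.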
The Ongoing Challenge
These complex and sensitive issues continue to evolve as new COVID-19 misinformation arises and online behaviors change. To help teams navigate these shifts, ActiveFence’s research team continuously monitors developments across the trust and safety ecosystem. Our third report in ActiveFence’s Policy Series details the disinformation policies of twenty of the biggest platforms, equipping Trust and Safety teams with the information needed to protect the public.
For guidelines and examples of health disinformation policy, download our report here.