With midterm elections quickly approaching, Trust & Safety teams on every major platform face the same conundrum: how to quickly minimize the spread and effects of election-related misinformation. While platforms have historically been reactive to these trends, in the last few years there has been a more serious effort toward prebunking, or dispelling misinformation before it takes root. A comprehensive strategy to bolster prebunking and reduce misinformation needs to take a variety of tactics into account.
Unlike debunking, which involves reactive measures to expose false statements, prebunking aims to dispel misinformation before it spreads. It isn't an entirely new concept, but it's one that has yet to be fully fleshed out in terms of how platforms implement it as policy. As a practice, it began to develop after the 2016 presidential election, in response to the organized disinformation campaigns and the 'infodemic' that took hold of the country and its most-used platforms. Prebunking gained further momentum during the Covid-19 pandemic, particularly around vaccines. With every major event or trend, there are likely to be swarms of misinformation running rampant on platforms, and prebunking is one excellent strategy for combating this phenomenon.
Typically, prebunking involves labeling, an integral part of policy enforcement, though it isn't limited to this. Platforms may use automation tools to track relevant keywords and attach a warning, a piece of context, or links to external organizations with further information about a particular topic. YouTube, for example, has teamed up with Google's Jigsaw team to produce prebunking videos on a number of topics, including the war in Ukraine and Covid-19 vaccines. Alongside the videos, YouTube provides links to organizations like the World Health Organization, which offer users accurate information, rounding out its prebunking policy. Twitter began prebunking during the 2020 election, placing messages at the top of users' feeds with specific election-related information. The platform's goal was to ensure that all US-based users were properly informed about voting, dispelling myths about election fraud. Efforts like these indicate that stopping rumors, myths, and potentially dangerous information before they spread may, in fact, be possible.
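To make the labeling mechanics concrete, here is a minimal sketch of how keyword-triggered context labels might work. The topics, patterns, notice text, and links below are illustrative assumptions, not any platform's actual configuration:

```python
import re

# Illustrative topic-to-label mapping. The keywords, notice text, and URLs
# are assumptions for this sketch, not any platform's real configuration.
CONTEXT_LABELS = {
    "elections": {
        "patterns": [r"\bvoter fraud\b", r"\brigged election\b", r"\bmail-in ballots?\b"],
        "notice": "Get official information about voting in your area.",
        "link": "https://www.usa.gov/voting",
    },
    "vaccines": {
        "patterns": [r"\bvaccines?\b", r"\bcovid-19\b"],
        "notice": "Learn more about COVID-19 vaccines from the WHO.",
        "link": "https://www.who.int",
    },
}

def label_post(text: str) -> list[dict]:
    """Return the context labels to attach to a post, based on tracked keywords."""
    labels = []
    for topic, cfg in CONTEXT_LABELS.items():
        if any(re.search(p, text, re.IGNORECASE) for p in cfg["patterns"]):
            labels.append({"topic": topic, "notice": cfg["notice"], "link": cfg["link"]})
    return labels

print(label_post("They found voter fraud in my county!"))
# -> one "elections" label with its notice and link
```

In practice, a rules layer like this would sit alongside ML classifiers, since static keyword lists alone miss paraphrases and newly coined slang.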
It goes without saying that prebunking came about as a reaction, not as a proactive strategy. But given the widespread disinformation campaigns of the 2016 election, and the continued spread of misinformation on a variety of topics in the years since, it's clear that combating this problem proactively is the best way forward.
This election season, platforms have been putting tactics like those described here into practice. With the US midterm elections quickly approaching, platforms should have implemented their prebunking strategies yesterday. That being said, there's always work to be done: ensuring users are presented with accurate information is an important step platforms can take to earn user trust.
Whether by producing videos, suggesting links to reputable sources of information, or taking some other route, platforms should have a strategy for prebunking. One of the simplest ways to incorporate it into a broader content moderation policy is labeling. By understanding the discourse around a particular topic, Trust & Safety teams can train their automation tools to flag specific keywords or phrases. Flagged content can then be sent to moderators for review, and from there, policy dictates what's done with it.
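As a rough sketch of that flag-and-review flow, assuming a hypothetical watchlist and an in-memory queue (a real system would persist the queue and record moderator decisions):

```python
from dataclasses import dataclass, field
from collections import deque

# Hypothetical watchlist a Trust & Safety team might maintain for one topic;
# the phrases are illustrative, not a recommended policy.
FLAGGED_PHRASES = ["stolen election", "ballot harvesting", "dead voters"]

@dataclass
class Post:
    post_id: int
    text: str

@dataclass
class ReviewQueue:
    """Holds posts flagged by automation until a human moderator reviews them."""
    pending: deque = field(default_factory=deque)

    def flag_if_needed(self, post: Post) -> bool:
        text = post.text.lower()
        if any(phrase in text for phrase in FLAGGED_PHRASES):
            self.pending.append(post)  # route to human review per policy
            return True
        return False

queue = ReviewQueue()
queue.flag_if_needed(Post(1, "Proof of ballot harvesting here!"))  # queued for review
queue.flag_if_needed(Post(2, "Go vote today!"))                    # passes through
```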
But this is only the first step. Language changes, and on the internet it changes at nearly lightning speed. It's no secret that users across platforms employ euphemistic language and codewords – what's known as 'algospeak' – to fly under the radar of moderation teams. Only by incorporating intelligence that detects these kinds of linguistic choices and shifts can platforms truly combat misinformation before it takes hold. For precisely this type of problem, companies need to continuously monitor different sources of harmful chatter, which may occur on or off a platform. Solutions like ActiveFence's provide real-time access, updates, and analysis of these occurrences, enabling platforms to get ahead of trends and implement policies to prevent their spread.
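The simplest form of algospeak is character substitution ('v4cc1ne' for 'vaccine'). Here is a minimal sketch of normalizing such substitutions before keyword matching; the substitution table and watch terms are assumptions for illustration, and a real deployment would rely on continually updated intelligence rather than a fixed list:

```python
# Common leetspeak-style swaps; a static table like this is an assumption
# for the sketch and would quickly go stale in production.
SUBSTITUTIONS = str.maketrans(
    {"0": "o", "1": "i", "3": "e", "4": "a", "5": "s", "@": "a", "$": "s"}
)

WATCHLIST = {"vaccine", "election"}  # illustrative watch terms

def normalize(text: str) -> str:
    """Undo simple character swaps so 'v4cc1ne' matches 'vaccine'."""
    return text.lower().translate(SUBSTITUTIONS)

def hits(text: str) -> set[str]:
    """Return watchlist terms present in the normalized text."""
    return WATCHLIST.intersection(normalize(text).split())

print(hits("The v4cc1ne and the 3lection"))  # -> {'vaccine', 'election'} (order may vary)
```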
Platforms not yet working with fact-checking organizations should also explore this as another prong in their prebunking strategy. Twitter, for example, has publicized that it works with 10 fact-checking organizations, including five that work in Spanish, for maximum coverage. Other tools include publicizing links to reputable sources of information, offering context for political or health-related advertisements, and even explicitly refuting misinformation trends.