The emergence of new global regulations surrounding online disinformation places increased responsibilities on platforms to protect their users and society at large from online harm. Each law's requirements differ, affecting platforms differently from country to country. Our interactive map identifies critical disinformation laws around the world to help guide policy teams and ensure compliance.
The internet was created to provide global access to data and information, but what happens when the information shared is false and intentionally misleading?
In treating disinformation, policy teams must walk a fine line in determining whether incorrect facts are being shared innocently or maliciously. Next, they must decide whether false information in wide circulation poses a geopolitical risk to national security, election integrity, or public health, to name just a few concerns. As the public sphere becomes increasingly active online, national legislatures are entering the conversation, drafting and enacting legislation that compels platforms to act in various ways.
__________________________________________
Our series of legislative maps offers a guide to the laws that impact technology platforms operating internationally. We have previously released reports on legislation on hate speech and terrorist content online.
The interactive map below provides policy teams with a tool for understanding relevant laws around the world as of November 25, 2022. Click around the map to access these insights. The map and guide will be updated throughout 2023.
For more detailed insights about the laws mentioned here, access our report.
While the US-led approach to internet law historically laid the groundwork for international internet governance, that consensus is fracturing. The US continues to uphold freedom of expression as a near-absolute principle and currently has no federal laws governing disinformation. In contrast, other countries are passing diverse laws with very different requirements across international borders. Additionally, the sheer quantity of newly proposed national regulations worldwide indicates that the trend of legal fragmentation and enhanced platform accountability is only growing.
In internet law, jurisdiction is determined by where the information in question is accessed, not where it is stored. As national disinformation ecosystems increasingly blur across borders, policy teams must understand each new law, since multiple competing regulatory systems may apply to a single piece of content at the same time.
In many countries, current laws hold ISPs and platforms hosting user-generated content accountable for monitoring and rapidly removing disinformation. In some cases, failure to comply risks stringent penalties in the form of hefty fines and even prison terms. The list below provides key insights into the current legal landscape on disinformation. Recognizing where the laws and their penalties diverge will be critical for Trust & Safety teams seeking to establish relevant policy and ensure compliance.
An Increasing Demand for Platform Accountability
The era of limited platform liability for harmful online content appears to be coming to an end. Emerging regulations that govern disinformation increasingly hold online platforms accountable for the removal of harmful content.
A Growing Dichotomy Between Freedoms
In many parts of the world, disinformation regulations bring to light a growing chasm between freedom of expression and freedom from harm. Countries may be signatories to or member states of a governing international legal standard that protects freedom of speech, and yet they have enacted national disinformation laws that stand in contrast to international law by placing a premium on freedom from harm.
Different Definitions of Disinformation
Regulations that proscribe online disinformation vary by country. Accordingly, laws address a range of threats associated with the intentional spread of false information targeting the integrity and safety of areas such as national security, elections, and public health.
Legislative Frameworks: Proactive Detection vs. Takedown Notices
Legislative frameworks vary widely: some require platforms to proactively identify violative content on their servers, others require action only upon notice of a violation, and still others impose no legal requirements at all. Most countries fall in the middle, requiring platforms to take down violative content once a court has ordered its removal.
Many countries impose stringent penalties for failure to comply with removal orders, and in some, the government may throttle or block access to the platform altogether as a result of noncompliance.
As new disinformation regulations emerge, policy teams must stay up to date on international laws governing internet usage that may directly impact their platforms in the countries where they operate. Our map and detailed report of these laws provide guidance for platforms that operate internationally. Staying abreast of increasing demands for platform accountability, the growing discord between freedom of expression and freedom from harm, and the variance in disinformation definitions from country to country will be critical to ensuring compliance.
In previous editions of our mapping series, we reviewed the regulations governing online terrorist content along with the current legislation that regulates hate speech. Stay tuned for our next legislation map, which will detail global laws governing online child safety.