ActiveFence helps stop misinformation and secure election integrity
As we head into 2024, one topic is top of mind for policymakers, regulators, and those working in national security: the rapid spread of false and misleading information. Indeed, the World Economic Forum’s recently released Global Risks Perception Survey 2023-2024 ranked misinformation and disinformation as the most significant global risk. This concern puts increased pressure on Trust & Safety teams to quickly assess and adequately handle misinformation on their platforms.
The ranking should come as no surprise, given our global society’s vulnerabilities to major geopolitical events (war, international relations, and national elections) coupled with the continued process of social fracturing.
Handling this confluence of events is more challenging than ever due to the reduction in resources that began in 2023 and continues today, alongside the democratization of AI-generated content creation, which has amplified the spread of effective, harmful content.
Misinformation can sway public sentiment and inspire or reduce appetite for social participation – effectively moving the needle in elections. As was seen in the US in 2020, Germany in 2022, and Brazil in 2023, it can lead to violent escalations that jeopardize the smooth functioning of democracies.
Understanding trends in misinformation, both the narratives being pushed and the means of their dispersal, is an essential capability for Trust & Safety teams operating in this complex and strained global context.
The field of misinformation moves quickly, and a major challenge for platform policymakers is constructing flexible yet rigorous user agreements that regulate user-generated content and comply with the EU’s Digital Services Act and the UK’s Online Safety Act 2023.
Platforms should consider the impact of content within the context in which it is shared, the intent behind sharing it, and the implications for platform security that follow from its being shared.
ActiveFence works to flag new harmful narratives, providing the nuanced context our partners need to understand each narrative and its reach, and to establish the impact of misinformation.
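To make the idea of narrative flagging concrete, here is a minimal sketch. It is not ActiveFence’s actual system: the narrative catalog, threshold, and TF-IDF similarity approach are illustrative assumptions, standing in for analyst-curated intelligence and production-grade models.

```python
# Illustrative sketch only: match incoming posts against a (hypothetical)
# catalog of known harmful-narrative summaries using TF-IDF cosine similarity,
# so likely matches can be triaged with their narrative context attached.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical narrative summaries, standing in for analyst-curated intelligence.
KNOWN_NARRATIVES = {
    "ukraine-nato-exploitation": (
        "Western support for Ukraine is a ploy to exterminate Ukraine's army "
        "and let NATO exploit the country socially and economically"
    ),
    "oct7-denial": "Claims that the October 7 attacks on civilians were staged or faked",
}

def flag_candidate_narratives(post_text: str, threshold: float = 0.25) -> list[str]:
    """Return IDs of known narratives the post resembles above the threshold."""
    ids = list(KNOWN_NARRATIVES)
    docs = [KNOWN_NARRATIVES[i] for i in ids] + [post_text]
    tfidf = TfidfVectorizer(stop_words="english").fit_transform(docs)
    scores = cosine_similarity(tfidf[len(ids)], tfidf[: len(ids)]).ravel()
    return [nid for nid, score in zip(ids, scores) if score >= threshold]

post = "Western aid just lets NATO destroy Ukraine's army and exploit the country"
print(flag_candidate_narratives(post))  # expected: ['ukraine-nato-exploitation']
```

In practice, lexical overlap alone misses paraphrases and cross-language variants, which is why human analysts and contextual review remain central to the workflow.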
2024 is already charting a course of global security deterioration and expanding armed confrontations in Europe, Asia, the Middle East, and Africa. Online, these conflicts see professional actors and interested supporters engaged in information operations to shift public opinion (national and international) and to strengthen or weaken military resolve.
These information wars require vast quantities of persuasive content to be created and shared, some based on fact and some on fiction. Trust & Safety teams must analyze these narratives to determine whether they are ‘innocent rumors’ or malign attempts to influence public opinion.
Consider two trends we have flagged concerning conflicts in 2024.
Narratives surfaced alleging that Western support for Ukraine’s defense is, in reality, a ploy to “exterminate Ukraine’s army” and allow the country to be socially and economically exploited by NATO and private criminal corporations.
The narrative appears to target internal Ukrainian morale and decrease international support for financing Ukraine’s defense.
Pro-Kremlin accounts promote narratives claiming that the October 7th attacks on civilians were faked. Focusing on populations that have already been primed with another false narrative – claims that the March 2022 Russian massacre in Bucha, Ukraine, was fabricated – these accounts promoted similar narratives about Israel.
The focus here is on weakening Israel’s casus belli by casting doubt on the nature of the Hamas-led October 7 attack.
The mass of overlapping conflicts creates significant issues for the categorization of misinformation, and this is especially true given the electoral context of 2024.
We are undertaking an almost unparalleled democratic experiment where, for the first time, the populations of 83 countries—over 40% of UN members—will vote within the same calendar year.
These elections span the US, India, the EU 27, Mexico, Indonesia, Pakistan, South Africa, Iran, Russia, Taiwan, and South Korea.
Each national ecosystem will be under stress, with the potential for the spread of false information increasing dramatically. This situation is not hypothetical: strategic elections frequently attract misinformation narratives, as various actors (from state actors to conspiracy theorists) seek to influence votes or prime audiences for future content.
Looking ahead to the 2024 elections, we can already see these platform threats in events recorded in Taiwan and the US.
In Taiwan, we saw coordinated claims promoted online alleging that the current DPP government, to which the CCP is particularly hostile, is engaged in a pro-US coup to capture Taiwan’s military and population for use against mainland China.
In the US, for example, we identified claims that Vivek Ramaswamy, then a candidate for the Republican nomination and the candidate politically closest to former President Trump, was a Trojan Horse politician working to steal voters once former President Trump had been falsely imprisoned.
These trends are local manifestations of global stories that attempt to sow distrust in the election procedures that underpin our democracies. The overlap is not incidental: concepts travel from one national ecosystem to another, with preexisting claims used to bolster each new assertion.
The events described above are not new, though their sheer volume significantly raises the risk they pose. What is new is the digital landscape we find ourselves in: AI was a buzzword in 2020, but today it is a reality.
Threat actors of all complexities can harness the power of Generative AI to produce effective, misleading, emotive content that can go viral.
We saw this in the Russia-Ukraine war, where, in 2022, information actors managed to fabricate and distribute a deepfake video of President Zelensky on Ukrainian TV, with the announcement, “There is no tomorrow, at least not for me. Now I have to make another difficult decision: To say goodbye to you. I advise you to lay down your arms and return to your families. It is not worth dying in this war.”
In addition to forgeries, generated content is used to spread emotive, misleading political messaging. The creation of this material has been democratized, and it played an important role in Argentina’s 2023 election, where candidates used these technologies to attack one another and speak to their bases.
We see the IRGC-backed Ansar Allah Houthi movement and its allies using these same tools effectively to legitimize their attacks on Western-linked shipping in the Red Sea and to incite others to act around the world. In the upcoming high-risk elections, the consequences could be severe.
Trust & Safety teams’ fears about misinformation spreading on their platforms are well founded. The EU has already used the powers afforded by the DSA to launch legal action against X (formerly Twitter) for hosting illegal content and disinformation surrounding the Israel-Hamas war. If found non-compliant, the company faces a $264M fine (6% of annual global turnover). The legal liability of platforms hosting misinformation is not limited to the EU: the UK’s Online Safety Act 2023 has created new offenses around the spread of false information online, and strict action obligations exist in countries such as Singapore, India, and Brazil.
Complying with statutory requirements and securing user safety requires a multi-pronged, intelligence-backed detection and safeguarding system.
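As a loose illustration of what “multi-pronged” can mean in practice, the sketch below fuses independent signals into a single routing decision. The signal names, thresholds, and queues are hypothetical placeholders, not a description of any specific platform’s or vendor’s pipeline.

```python
# Hedged sketch: fuse independent detection signals (classifier score,
# intelligence-feed narrative match, user reports) into one routing decision.
# All thresholds and queue names here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Signals:
    model_score: float     # 0-1 confidence from a misinformation classifier
    narrative_match: bool  # matched a known harmful narrative (see earlier sketch)
    user_reports: int      # distinct user reports against the item

def route(sig: Signals) -> str:
    """Map fused signals to an action queue; thresholds are placeholders."""
    if sig.narrative_match and sig.model_score >= 0.9:
        return "remove_and_log"   # high confidence plus intelligence corroboration
    if sig.model_score >= 0.7 or sig.user_reports >= 3:
        return "human_review"     # uncertain cases escalate rather than auto-action
    return "monitor"              # low risk, but keep visibility for audits

print(route(Signals(model_score=0.95, narrative_match=True, user_reports=5)))
# -> remove_and_log
```

Logging every routed item together with its signals keeps enforcement auditable, which also supports the transparency and reporting obligations that regulations like the DSA impose.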