Press Release

ActiveFence Announces New Industry Capability: Detection of Newly Generated Unindexed CSAM

NEW YORK, July 23, 2024 — ActiveFence, the leading technology solution for Trust and Safety intelligence, management, and content moderation, today announced an industry breakthrough: algorithms that detect newly generated or manipulated Child Sexual Abuse Material (CSAM), going beyond matching against CSAM already indexed in existing databases.

ActiveFence’s detection automation solution, ActiveScore, uses AI to identify harmful content at scale. The key advantage of these AI models in CSAM detection lies in their ability to identify new and previously unreported content.

The production, distribution, and consumption of CSAM have been significant societal issues for decades. With the internet’s widespread use and the rise of file-sharing websites, social media platforms, and Generative AI (GenAI), the situation has worsened. CSAM is inherently evasive, as bad actors continuously generate new items and manipulate previously reported ones to evade detection. This makes it nearly impossible for platforms to effectively detect and remove CSAM without the right AI models.

ActiveFence’s AI algorithms identify novel CSAM across modalities, including video, image, and text. Trained on proprietary data sources, our text detectors identify sexual solicitation and CSAM-related discussions, estimate a user’s age, and flag specific keywords and emojis, multilingual terminology, and GenAI prompt-manipulation techniques. For images, our computer vision detectors identify indicators of CSAM, including specific body parts, and estimate age.

Matar Haller, PhD, VP of Data and AI: “While image hashing and matching have been effective, they are not enough, especially in the GenAI era, where the barrier to entry for creating new, and therefore unindexed, CSAM has been drastically lowered. Integrating AI detection models is critical to ensuring we are able to detect effectively and efficiently at scale.”

Future technological advancements will further enhance CSAM detection by identifying even more subtle features and patterns that traditional methods often miss. These advancements will play a vital role in countering the evolving tactics of child predators, particularly as generative AI continues to evolve.

To learn more about how ActiveFence safeguards online platforms and users against online harm, please visit our website at www.activefence.com.

About ActiveFence:
ActiveFence is the leading Trust and Safety provider for online platforms, protecting over three billion users daily from malicious behavior and content. Trust and Safety teams of all sizes rely on ActiveFence to keep their users safe from the widest spectrum of online harms, including child abuse, disinformation, hate speech, terror, fraud, and more. We offer a full stack of capabilities, combining deep intelligence research with an AI-driven platform for harmful content detection and moderation. ActiveFence protects platforms globally, in over 100 languages, letting people interact and thrive safely online.