The production, distribution, and consumption of Child Sexual Abuse Material (CSAM) have been pressing societal issues for decades. The widespread use of the internet, accelerated by the proliferation of file-sharing websites, social media, and most recently, generative AI (GenAI), has made the situation even worse.
In the digital age, CSAM can be created and rapidly shared in large quantities, presenting a formidable challenge for Trust & Safety (T&S) professionals. The rapid production and sheer volume of this material can make it difficult to detect and eliminate—daunting yet critical tasks. Manual moderation is insufficient to handle the scale and speed of CSAM distribution, requiring automated, tech-driven solutions to keep pace with this evolving threat and protect vulnerable individuals from exploitation.
Image hash matching has been a cornerstone in the fight against online CSAM. This technology works by creating unique digital fingerprints, or hashes, for images or videos of CSAM. These hashes are long strings of numbers that represent the content, allowing systems to compare them against a list of known CSAM hashes without needing to store or view the original material. When a match is found, the content can be blocked across various platforms. This algorithmic approach enables the analysis of vast amounts of graphic content without needing to manually view it, ensuring high precision and efficiency.
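To make the mechanics concrete, here is a minimal sketch of exact-match hashing in Python, assuming a hypothetical set of known hashes and using SHA-256 purely for illustration; production systems rely on dedicated hash lists supplied by child-safety organizations rather than hard-coded values like these.

```python
import hashlib

# Hypothetical set of hashes of known violating images; the value below is an
# illustrative placeholder, not a real database entry.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def exact_hash(image_bytes: bytes) -> str:
    """Compute a cryptographic fingerprint of the raw image bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def is_known_match(image_bytes: bytes) -> bool:
    """Flag content whose fingerprint appears in the known-hash list."""
    return exact_hash(image_bytes) in KNOWN_HASHES

# Example: check an uploaded file without storing or viewing the original.
with open("upload.jpg", "rb") as f:
    if is_known_match(f.read()):
        print("Match found: block the upload and escalate for reporting.")
```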
Beyond its precision and efficiency, CSAM hash matching benefits both human moderators and victims depicted in abusive material. It reduces the need for human reviewers to repeatedly view disturbing images, protecting their mental health. It also minimizes the ongoing re-victimization of individuals by removing the illegal images from view and preventing their duplication and resharing.
There are two main types of hashes used: exact-match hashes, which identify identical images, and perceptual hashes, which recognize visually similar images. Both are used in the fight against online CSAM. CSAM production often involves duplication, minor adjustments, and recycling of material featuring past victims. With perceptual image hashing, once a piece of content is identified as malicious, visually similar copies can be detected and prevented from being reshared on the same platform.
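The perceptual side can be sketched with the open-source imagehash library: instead of requiring byte-for-byte identity, the hash of a candidate image is compared against known hashes by Hamming distance, so lightly edited copies still register as matches. The hash value, threshold, and file name below are illustrative assumptions, not values from any real database.

```python
from PIL import Image
import imagehash  # open-source perceptual hashing library

# Hypothetical list of perceptual hashes of previously identified content.
KNOWN_PHASHES = [imagehash.hex_to_hash("d1c48f0a3b5e9027")]

HAMMING_THRESHOLD = 8  # max bit difference still treated as "visually similar"

def is_visually_similar(path: str) -> bool:
    """Return True if the image is a near-duplicate of known content."""
    candidate = imagehash.phash(Image.open(path))
    return any(candidate - known <= HAMMING_THRESHOLD for known in KNOWN_PHASHES)

# A lightly cropped or filtered copy usually stays within the threshold, so it
# is caught even though its exact (cryptographic) hash has changed entirely.
print(is_visually_similar("reuploaded_copy.jpg"))
```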
The National Center for Missing and Exploited Children (NCMEC) is the largest and most influential child protection organization in the US. It maintains a comprehensive database of CSAM hashes, which serves as a central resource for T&S teams across various platforms to compare, detect, and remove illicit content. NCMEC also operates the CyberTipline, to which US tech companies are required by federal law to report any apparent CSAM incident on their platforms. These reports help enrich NCMEC’s database and aid in preventing future offenses.
Other countries and jurisdictions maintain their own databases, such as the UK’s Child Abuse Image Database (CAID), and global nonprofits like Project VIC also contribute to these efforts.
Despite its utility, image hash matching has notable limitations. One major challenge is its reliance on existing databases of known CSAM, so new and unindexed CSAM can evade detection. Simple picture manipulations like cropping, rotating, or adding filters can alter an image enough to create a new hash, allowing perpetrators to bypass hash-matching detection systems. False positives can occur when non-CSAM content coincidentally matches a hash in the database, leading to unnecessary investigations.
The rise of GenAI has introduced a new layer of complexity, with predators treating the technology as a playground. With easy-to-use text-to-image models, minor prompt alterations can create endless variations of the same piece of CSAM, making it difficult to detect with traditional hash-based methods. Moreover, our research indicates that more sophisticated predators retrain open-source GenAI models to create novel CSAM at massive scale, depicting both real children and fictional children generated with GenAI. They also use deepfakes that place children’s faces in depictions of sexual activity, another area where hash matching falls short.
This adversarial space underscores the urgent need for more sophisticated and adaptive detection methods to combat CSAM in today’s rapidly changing digital age.
To learn more about the new and ever-evolving ways child predators are misusing GenAI, download our full report on Child Predator Abuse of Generative AI Tools.
Large multinational companies like Google and Apple recognize the critical importance of addressing CSAM on their platforms. They actively share resources and strategies to detect, remove, and report this content, deploying new technologies and methods to adapt to the evolving landscape.
Hash matching mainly falls short when dealing with new, previously uncatalogued content. To bridge this gap, combining hashing with advanced computer vision algorithms is essential. Computer vision technologies can analyze and interpret images and videos at a much deeper level, identifying specific features indicative of CSAM that hash matching might miss—such as certain patterns, textures, and objects.
Computer vision models can be trained to recognize specific visual elements commonly found in CSAM.
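A hedged sketch of how such layering might be wired together is shown below: a cheap hash lookup runs first, and only unmatched content is passed to a computer-vision classifier whose score decides whether a human reviewer sees it. The function names, threshold, and stubbed classifier are hypothetical placeholders, not any vendor’s actual pipeline.

```python
import hashlib

# Illustrative placeholder for known-CSAM hashes; real systems load these from
# vetted industry databases, never from hard-coded values.
KNOWN_HASHES = {"9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"}
REVIEW_THRESHOLD = 0.85  # hypothetical cut-off for escalating to human review

def exact_hash(image_bytes: bytes) -> str:
    """Cryptographic fingerprint used for the known-content lookup."""
    return hashlib.sha256(image_bytes).hexdigest()

def classifier_score(image_bytes: bytes) -> float:
    """Stand-in for a trained computer-vision model that estimates how likely
    an image is to contain violative visual elements (0 to 1)."""
    return 0.0  # placeholder so the sketch runs; plug in real inference here

def moderate(image_bytes: bytes) -> str:
    # Layer 1: precise, inexpensive check against known, already-hashed content.
    if exact_hash(image_bytes) in KNOWN_HASHES:
        return "block_and_report"
    # Layer 2: computer vision catches novel material no hash list has seen.
    if classifier_score(image_bytes) >= REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "allow"

print(moderate(b"example image bytes"))
```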
To tackle the challenge of novel, AI-generated CSAM, advanced AI models are essential. These models, built with machine learning and deep learning techniques, go beyond hash matching to identify new, unindexed content. They can learn from vast amounts of data and improve over time, detecting subtle patterns and anomalies indicative of CSAM.
ActiveFence’s detection automation solution, ActiveScore, for example, uses AI to enhance detection accuracy.
The primary advantage of AI models in CSAM detection is their ability to identify new and previously unreported content by combining and layering multiple detection algorithms. These models can also significantly reduce false positives, a common issue with traditional detection methods.
For instance, ActiveScore’s CSAM detector has a false positive rate of less than 0.02%. By refining detection accuracy, AI models help ensure genuine content is not mistakenly flagged, maintaining the balance between vigilance and user privacy.
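As a rough illustration of what layering multiple signals can look like (not ActiveScore’s actual implementation), the sketch below combines hypothetical outputs from a hash lookup, a perceptual-hash comparison, a vision model, and a text model into a single risk score; all weights and thresholds are assumptions for demonstration only.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical per-item outputs from independent detectors."""
    exact_hash_match: bool  # known-hash lookup hit
    phash_distance: int     # Hamming distance to the nearest known hash
    vision_score: float     # computer-vision classifier likelihood, 0 to 1
    text_score: float       # score from surrounding text/metadata, 0 to 1

def risk_score(s: Signals) -> float:
    """Combine layered signals into a single risk score (illustrative weights)."""
    if s.exact_hash_match:
        return 1.0  # a definitive known-content match short-circuits the rest
    near_dup = max(0.0, 1.0 - s.phash_distance / 16)
    return max(near_dup, 0.7 * s.vision_score + 0.3 * s.text_score)

# Requiring agreement across layers before auto-actioning helps keep false
# positives low while still surfacing novel content for human review.
print(risk_score(Signals(False, 22, 0.92, 0.4)))
```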
As mentioned earlier, multinational companies have already started using combined detection strategies, highlighting their effectiveness. Microsoft, for instance, developed a tool that identifies child predators who groom children for abuse in online chats, using a combination of hashing, computer vision, and AI.
Similarly, Facebook integrated AI and machine learning algorithms into its content review processes, enabling the platform to detect and remove CSAM more swiftly.
At ActiveFence, ActiveScore’s CSAM detectors can identify novel CSAM-related violations across various child abuse areas and different modalities like video, image, and text. For detecting text-related violations, our detectors are trained on over 10 million proprietary sources of online chatter and can spot indicators of child predator behaviors, including the use of specific keywords and emojis, multilingual terminology, and GenAI text prompt manipulation techniques.
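A deliberately simplified sketch of lexical text signals is shown below; the terms and emoji are placeholder strings, and real detectors rely on curated, multilingual intelligence and learned models rather than a hard-coded list, but it illustrates how keyword and emoji indicators can gate escalation to deeper review.

```python
import re

# Placeholder indicator lists; real systems draw on continuously updated,
# multilingual intelligence rather than a short hard-coded set like this.
SUSPICIOUS_TERMS = {"example_codeword_1", "example_codeword_2"}
SUSPICIOUS_EMOJIS = {"\U0001F511"}  # illustrative placeholder emoji

def text_indicator_count(message: str) -> int:
    """Count simple lexical indicators in a chat message."""
    tokens = set(re.findall(r"\w+", message.lower()))
    hits = len(tokens & SUSPICIOUS_TERMS)
    hits += sum(message.count(e) for e in SUSPICIOUS_EMOJIS)
    return hits

# Messages whose indicator count crosses a threshold would be routed to a
# richer model and, ultimately, to human review.
print(text_indicator_count("hello example_codeword_1 \U0001F511"))
```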
In a recent case study, ActiveScore uncovered a CSAM-promoting group hidden within a seemingly benign profile on a social platform by analyzing the image against its intel-fueled database. This led to the immediate flagging and removal of the dangerous user.
While image hashing and matching have been effective, they can still be augmented with additional technology to better detect new forms of CSAM, especially in the GenAI era. Because hash matching is vulnerable to alterations, integrating AI models and computer vision technology is crucial for more effective detection.
Future technological advancements will further enhance CSAM detection by identifying subtle features and patterns that traditional methods miss. These technologies will be key in countering the evolving tactics of child predators, particularly as generative AI continues to evolve.
User-generated content (UGC) platforms must adopt combined detection strategies and deploy advanced tools to combat the spread of CSAM. By integrating these sophisticated models or deploying off-the-shelf solutions, companies can better protect vulnerable populations and make the internet safer for everyone.
Integrate advanced AI models into your platform’s CSAM detection frameworks.