Non-Graphic and Chat-Based Child Sexual Exploitation: A Platform Blindspot

June 20, 2024

The UK-based Internet Watch Foundation (IWF) has declared 2023 the “most extreme year on record” for online child sexual abuse.

While this alarming assessment focuses on graphic child sexual abuse material (CSAM) like images and videos, data on non-graphic child sexual exploitation is almost nonexistent. This isn’t due to a lack of such material, but rather to its elusive nature, which poses a significant detection challenge for user-generated content (UGC) platforms and their moderators.

Surprisingly, non-graphic child safety offenses are more prevalent on UGC platforms than their graphic counterparts. This is because their complexity and subtlety make them exceptionally difficult to detect, often allowing them to fly under the radar.

Types of Non-Graphic Child Sexual Violations

Non-graphic CSAM is a catch-all term that refers to two main types of content:

  • Audio
    In some pedophile communities, audio CSAM is very popular. It includes recordings that detail erotic stories involving minors, narrations of erotic scenes read by children, retellings of scenes depicting the sexual exploitation of minors, or even non-specific sexually suggestive sounds or noises made by children.
  • Text
    Text-based CSAM encompasses a variety of harmful content. This includes grooming, sextortion, and written erotic stories about minors, where predators use unique terminology and code words to avoid detection.

The Threat Landscape: Deploying Non-Graphic Child Sexual Exploitation

The Child Crime Prevention & Safety Center estimates that 500,000 predators are active online every day, putting millions of children at risk. 

These offenders commit a multitude of non-graphic child sexual exploitation offenses, which offer distinct advantages in avoiding detection. The subtlety of this material makes it harder to detect compared to graphic imagery, allowing predators to communicate, interact with and exploit minors, and spread content more easily without getting caught. 

Non-graphic child sexual violations include:

  • Child Sex Trafficking: Traffickers often use mainstream social media as gateways to encrypted platforms and cloud-based services, where they share information about victims, their locations, and the services offered, using specific language and codes to evade detection. 
  • Grooming: Textual content plays a significant role in grooming, where predators manipulate and coerce minors through seemingly innocent conversations that gradually escalate into sexual exploitation. Grooming is becoming more and more widespread on mainstream social media, where predators use these “normal” platforms to make initial contact with victims, and then move the conversation to less-moderated spaces, where they can further manipulate and exploit their targets.
  • Sextortion: Sextortion is another grave violation, where predators use text via online platforms to blackmail children into sharing explicit content or performing sexual acts under the threat of exposure. 
  • CSAM Trade: The distribution of CSAM is a component of a global economy that extends far beyond the dark web and black market. ActiveFence research reveals that CSAM-related transactions frequently take place on the surface web using mainstream payment services and applications. Moreover, it’s not limited to malicious actors; minors are also involved, often trading self-produced CSAM as they seek intimate relationships.
  • Community Building: Pedophiles, like other social media users, seek to build and sustain communities on mainstream platforms. They use esoteric language, codewords, and symbols to identify like-minded users. By blending into the general user population, they attempt to legitimize pedophilia, draw comparisons with marginalized groups, and share pseudoscientific articles supporting adult-minor relationships.
  • Expanding Reach Through Off-Platform Links: Malicious actors often use UGC platforms as gateways, sharing non-graphic CSAM to lure curious viewers and direct them to more private online spaces where graphic CSAM is more easily distributed. Sharing off-platform links is popular because on-platform CSAM is more easily detected by automated tools, human moderators, or user flagging, while off-platform links can be shared more discreetly, making detection harder. The most common ways to share off-platform links include placing them in post descriptions, embedding them in videos or images, or posting them in the comment section.
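
A minimal sketch of how a platform might surface off-platform links for review, assuming post descriptions and comments arrive as plain text. The regular expression, the WATCHED_DOMAINS list, and the find_offplatform_links function are illustrative assumptions, not ActiveFence’s detection logic.

```python
import re
from urllib.parse import urlparse

# Hypothetical watchlist of domains frequently used to move viewers off-platform;
# a real deployment would source this from threat intelligence, not a hardcoded set.
WATCHED_DOMAINS = {"example-shortener.io", "example-filehost.net"}

URL_PATTERN = re.compile(r"https?://[^\s<>\"']+", re.IGNORECASE)

def find_offplatform_links(text: str) -> list[str]:
    """Return URLs in a post description or comment whose domain is on the watchlist."""
    flagged = []
    for url in URL_PATTERN.findall(text):
        domain = urlparse(url).netloc.lower().removeprefix("www.")
        if domain in WATCHED_DOMAINS:
            flagged.append(url)
    return flagged

# Example: scan a comment for links pointing to watched domains.
print(find_offplatform_links("check this out https://example-shortener.io/abc123"))
```

Links embedded in videos or images would need OCR or frame analysis on top of this kind of plain-text extraction, which is part of why off-platform sharing is harder to catch.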

Types of non-graphic CSAM: examples of child safety offenses detected online and combated by ActiveFence

Why are Non-Graphic Child Safety Offenses so Hard to Detect?

UGC platforms use three main methods to detect and remove graphic CSAM and take action against users:

  1. Image recognition algorithms and hashes: These automatically identify and remove graphic CSAM from uploaded image-based content by comparing it to known hashes or patterns (see the hash-matching sketch after this list).
  2. User flagging: Users are empowered to report potentially violative content, aiding moderators in addressing issues swiftly by bringing attention to suspicious material.
  3. Human moderator review: Suspicious or flagged content undergoes manual review by human moderators who determine if it violates platform guidelines. Appropriate actions, such as content removal or account bans, are taken based on their findings.
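
For method 1, here is a minimal sketch of exact hash matching against a set of known hashes, assuming such a set is supplied by an industry hash list; the KNOWN_HASHES values are placeholders. Production systems typically also rely on robust perceptual hashing (PhotoDNA-style matching) so that re-encoded or resized copies still match, which a plain cryptographic digest cannot do.

```python
import hashlib
from pathlib import Path

# Placeholder set of hashes of known material, as distributed by industry hash lists.
KNOWN_HASHES: set[str] = {"<sha256-of-known-item-1>", "<sha256-of-known-item-2>"}

def sha256_of_file(path: Path) -> str:
    """Compute the SHA-256 digest of an uploaded file, streaming it in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

def is_known_csam(path: Path) -> bool:
    """Exact-match the upload's digest against the known-hash set."""
    return sha256_of_file(path) in KNOWN_HASHES
```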

Why These Methods Don’t Work for Text-Based CSAM:

  1. Image detection algorithms and hashes are ineffective for text. Traditional text-based detectors, like keyword flagging systems, struggle with elusive terminology and evolving lingo designed to evade detection (see the keyword-flagging sketch after this list).
  2. User flagging is ineffective for non-graphic CSAM. While it works for material that some find offensive, it fails when the viewer wants to see the content. Pedophiles, who seek out non-graphic CSAM, are unlikely to report it, thus bypassing the flagging system.
  3. Platforms often rely on manual review, but moderators usually lack the expertise to identify this type of content unless it is very explicit. The use of evasive language further complicates detection and moderation efforts.
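
To make point 1 concrete, here is a sketch of the kind of naive keyword flagger that coded terminology defeats; the FLAGGED_TERMS lexicon is a stand-in for whatever static term list a platform maintains.

```python
# A naive keyword flagger: the kind of text detector that coded language easily evades.
# The term list is illustrative only.
FLAGGED_TERMS = {"explicit term a", "explicit term b"}

def flag_message(text: str) -> bool:
    """Flag a message only if it contains a lexicon term verbatim."""
    lowered = text.lower()
    return any(term in lowered for term in FLAGGED_TERMS)

# A message written in code words or deliberate misspellings matches nothing
# in the lexicon, so it sails through unflagged.
print(flag_message("totally innocent-looking coded phrase"))  # False
```

Because matching is verbatim, any newly coined slang or deliberately altered spelling falls outside the lexicon and passes straight through.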

While less common than text-based abuses, audio-based abuses are explicit and theoretically easier to detect. In practice, however, they often go unnoticed simply because platforms don’t monitor audio content, largely due to language barriers: APAC countries, including China and Japan, are major sources of such content, and moderators who are not fluent in the languages used struggle to identify these abuses. Automated audio-based detection mechanisms also fall short because, like human moderators, they are not trained on the vast linguistic diversity involved.

While emerging technologies offer some hope in detecting non-graphic abuses, the larger issue lies in the lack of awareness among Trust & Safety teams. Without specific intelligence about the types of abuses occurring on their platforms and the particular threat actors producing them, teams struggle to effectively detect, thwart, and remove malicious content and accounts.

However, ActiveFence’s intelligence shows that detecting audio-based CSAM can be easier than previously thought. This content is typically produced and distributed by a small group of repeat offenders with distinct trademarks and characteristics. Much like legitimate music producers, these abusers flaunt their unique names within their audio clips, which makes it easy to train detectors to automatically identify tracks that contain these indicators.
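
A sketch of what such an indicator-based detector could look like, assuming clips have already been transcribed by a speech-to-text step and carry free-text metadata such as a title; the KNOWN_PRODUCER_TAGS aliases and the match_producer_indicators function are hypothetical.

```python
# Hypothetical list of "producer" signatures that repeat offenders embed in their clips.
KNOWN_PRODUCER_TAGS = {"producer-alias-1", "producer-alias-2"}

def match_producer_indicators(transcript: str, metadata: dict[str, str]) -> set[str]:
    """Return any known producer aliases found in a clip's transcript or metadata.

    Assumes the audio was transcribed upstream and that uploads carry free-text
    metadata such as a title or description.
    """
    haystack = " ".join([transcript.lower(), *(v.lower() for v in metadata.values())])
    return {tag for tag in KNOWN_PRODUCER_TAGS if tag in haystack}

# Example: a clip whose title carries a known alias gets surfaced for review.
print(match_producer_indicators("...", {"title": "new set by producer-alias-1"}))
```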

Actions Platforms Can Take

To combat non-graphic CSAM, online platforms must adopt proactive measures, precise intelligence, and effective strategies. 

Here are a few actionable tips to mitigate risks and prevent harm on UGC platforms:

Cross-Platform Research: One of the most essential steps is identifying threat actors who operate across multiple platforms. For example, a predator might share non-graphic CSAM on a public social media platform and then redirect users to a private messaging app where they distribute graphic CSAM. By tracking CSAM violations across platforms, you can preemptively mitigate risks and block these actors before they migrate to your platform. Tracking predators at their source provides valuable insights into their behavioral patterns, enabling platforms to better detect these threat actors before they exploit their services.
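
As a sketch of what acting on cross-platform intelligence could look like, the snippet below screens newly registered accounts against a shared feed of known threat-actor identifiers. The ThreatActorRecord fields, the INTEL_FEED, and the matching signals are assumptions for illustration, not a description of any specific sharing program.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ThreatActorRecord:
    """Hypothetical cross-platform intelligence record for a known threat actor."""
    username: str
    email_hash: str        # hashed identifier shared between platforms, never raw PII
    source_platform: str

# Illustrative intelligence feed; in practice this would come from a trusted sharing program.
INTEL_FEED = [
    ThreatActorRecord("known_alias_1", "a1b2c3...", "platform-x"),
]

def screen_new_account(username: str, email_hash: str) -> list[ThreatActorRecord]:
    """Return intel records whose identifiers match a newly registered account."""
    return [
        rec for rec in INTEL_FEED
        if rec.username.lower() == username.lower() or rec.email_hash == email_hash
    ]
```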

Lead Investigations: While most CSAM material is automatically removed from platforms, conducting investigations on items and users removed for child safety violations is important. This allows you to monitor evolving tactics, techniques, and terminologies used by bad actors. Understanding these patterns enables platforms to prevent CSAM more effectively and stay ahead of predators’ constantly evolving strategies.

Product Flexibility: To detect, moderate, and remove CSAM at scale, use advanced tools and products like ActiveOS or ActiveScore. Building your platform guided by safety-by-design principles, prioritizing user safety from the outset and throughout all product development stages, ensures that safety measures are ingrained in the core of the platform. Remaining agile in adapting new technologies and incorporating features to improve detection and removal efficiency is also vital to staying ahead of offenders.

User Accountability: Documenting abuses and sharing them in a knowledge-sharing system is a proactive step in preventing threat actors from operating across platforms. Banning users and removing their content often doesn’t stop threat actors, as they return with new accounts or migrate to other platforms. By cooperating with local law enforcement and sharing evidence, platforms can help catch offenders, strengthening deterrence and reducing online child safety violations.

Effectively solving a complex and nuanced issue like non-graphic CSAM demands a deep understanding of the trends, pervasiveness, and tactics employed by bad actors. Safeguarding the most vulnerable users is a challenging task, one that requires precise intelligence and proactive measures. As such, partnering with experienced subject-matter experts can provide valuable assistance in effectively addressing these challenges.

 

Editor’s Note: The article was originally published on November 29, 2022. It has been updated with new information and edited for clarity.

Want to proactively prevent all forms of CSAM and Child Safety Violations on your platform?

Talk to our experts today