ActiveFence’s Human Exploitation Lead Researcher Maya Lahav examines a rising trend in online behavior: victims of sickness, poverty, or war are coerced into being recorded and used to solicit donations. This harmful trend exploits some of the most vulnerable in the global community, monetizing the suffering of those who cannot legally consent.
Year after year, people increasingly opt to donate online to charitable causes. Crowdfunding and social media platforms with built-in fundraising features have helped facilitate this shift in philanthropic giving. Alongside this positive trend, however, a coercive pattern has developed in which victims of sickness, poverty, or war are recorded and used to solicit donations without the capacity to consent.
Consent is a fundamental stress test that must be used to evaluate the nature of online behaviors.
For example, while adult pornography is generally legal around the world and often permissible on online platforms, the same recordings are treated wholly differently when created without the featured person’s knowledge. They are classified as non-consensual intimate imagery (NCII): not only are they not permitted on platforms, they are also illegal.
In the context of requests for donations, a person may agree to be featured in appeals for funds. However, when that choice is taken away, whether because they are too young to consent, too sick, or in distress, the content is classified as human exploitation. This exploitative content is often, though not exclusively, used by threat actors seeking to monetize suffering and generate profits online.
Threat actors are leveraging the plight of vulnerable individuals, families, and even communities. They use photographs and video recordings of at-risk people to solicit donations from which they profit. To increase revenues, threat actors generate emotive content that exploits the suffering of sick or malnourished children and at-risk adults. This content is disseminated online and across social media, accompanied by requests for money.
The subjects of this material often cannot offer consent and have no control over the funds that are donated. In many cases, these at-risk individuals will never receive the funds, or will receive only a small share of the donations solicited by the activity. This is despite the threat actors frequently posing as regulated charitable organizations or private charitable fundraisers.
This coercive cyberbegging (sometimes called e-panhandling) affects many platforms, including social media, website hosting, crowdfunding, and payment processing services. It presents a distinct set of online behaviors, awareness of which is essential for moderators seeking to detect harmful on-platform chatter and its related activity.
Geopolitical events catalyze coercive cyberbegging activity, with accounts demonstrating the extreme economic need of those living in refugee camps and the devastating impact of natural disasters such as floods or earthquakes.
Accounts on livestreaming platforms, or on platforms with livestream features, showcase children and vulnerable adults with severe illnesses or disabilities, or those living in dire conditions. They share footage of at-risk persons coerced into begging for hours, or exploitatively show them in distress to convince viewers to donate. It is claimed that the funds collected will help alleviate severe financial need or life-threatening medical conditions. Other threat actor accounts amplify the initial recording by re-posting the content or directing followers to watch the material in evergreen posts.
A significant portion of coercive cyberbegging both exploits at-risk people and is fraudulent. It is therefore key to distinguish between accounts fundraising with good intentions and those operating under false pretenses. Threat actors routinely claim that NGOs and other registered charitable organizations operate their accounts, so an important countermeasure is to verify that an account claiming such an affiliation is in fact operated by the organization it names, as in the sketch below.
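A minimal sketch of that verification step in Python: `VERIFIED_CHARITIES` is a hypothetical stand-in for whatever charity registry or internal allowlist a platform maintains, and `affiliation_is_verified` is an illustrative helper, not a real API.

```python
# Hypothetical sketch: VERIFIED_CHARITIES stands in for a real charity
# registry or internal allowlist; all names here are assumptions.
VERIFIED_CHARITIES = {
    "example relief fund",       # placeholder entries, not real data
    "example children's aid",
}

def affiliation_is_verified(claimed_org: str | None) -> bool:
    """Check an account's claimed charitable affiliation against the
    registry. An unverifiable claim is a red flag, not proof of fraud."""
    if claimed_org is None:
        return False  # nothing claimed, nothing to verify
    return claimed_org.strip().lower() in VERIFIED_CHARITIES
```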
Trust & Safety platforms should monitor circumvention techniques, which may signal coordinated network activity. Cross-platform activity with similarly named accounts and parallel content also points to coordinated fraudulent operations, even in cases where the content is shared from individual accounts. Primary accounts can be a gateway to multiple off-platform payment systems, including links to bank account information, fundraising websites, and digital payment platforms. By tracking this cross-platform activity, trust & safety teams can effectively detect this harmful content, and ensure that their platforms are not misused for harm.
Understanding that this exploitative activity is present on major tech platforms is the first step in countering it.
As Trust & Safety teams look for identifiable patterns of intentionally deceptive behavior, some activity used to amplify the content’s reach indicates a direct nexus to cyberbegging. Cataloging these can be used to detect future emerging examples of this damaging activity.
Signifiers include appeals for donations to broad fundraising causes, such as helping “poor children in Africa,” where requests for donations are linked to sweeping pleas to “help children stay alive.” Recurring hashtags often accompany these appeals and can be cataloged alongside the phrases themselves, as in the sketch below.
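A minimal sketch of such a catalog in Python: the phrase list reuses the examples above, while the hashtag set passed to `score_post` is a placeholder for a catalog a team would maintain over time.

```python
# Minimal signifier catalog; phrases reuse the examples above, and the
# hashtag catalog is a placeholder a team would maintain over time.
import re

SIGNIFIER_PHRASES = [
    "poor children in africa",
    "help children stay alive",
]
HASHTAG_PATTERN = re.compile(r"#\w+")

def score_post(text: str, cataloged_hashtags: set[str]) -> int:
    """Count cataloged signifiers in a post; higher scores are queued
    for human review rather than actioned automatically."""
    lowered = text.lower()
    hits = sum(phrase in lowered for phrase in SIGNIFIER_PHRASES)
    hits += sum(tag in cataloged_hashtags
                for tag in HASHTAG_PATTERN.findall(lowered))
    return hits
```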
Coercive cyberbegging has become increasingly prevalent, given the potential reach and threat actors’ ability to evade detection.
At its core is the exploitation of some of the most vulnerable in the global community, monetizing the suffering of those who cannot legally consent. Trust & Safety teams should be aware of the intrinsically fraudulent and exploitative practices that pose a risk to their platforms and communities. Conducting deep threat intelligence to track and analyze the activity of these communities is essential for platforms to strengthen detection and moderation and enhance mitigation capabilities.
Want to learn more about the threats facing your platform? Find out how new trends in misinformation, hate speech, terrorism, child abuse, and human exploitation are shaping the Trust & Safety industry this year, and what your platform can do to ensure online safety.