In honor of Safer Internet Day, ActiveFence shares five ways platforms can support the wellbeing of children online.
We can all agree that protecting children online is, or should be, a top priority for companies. Today, children learn to use a touchscreen before they can read what's on it. With the internet now a regular part of children's daily lives, threats to their safety and security are more present than ever. From bullying and harassment to grooming by predators and the promotion of self-harm, children are at risk on the internet.
To help platforms protect children, we’ve outlined five steps companies can take to keep their platforms safe.
1. Understanding platform-specific threats
Knowing that children are exposed to dangers online is not enough. Companies must gain in-depth knowledge of the harms that can take place on their own platforms. To do so, they should identify platform-specific dangers, which will help define areas of focus. These shape the actions, preventative measures, and policies a company must enact. Threats include the spread of CSAM, child abuse and sexual exploitation, self-harm and eating disorder content, and online bullying and harassment.
For example, ActiveFence has found that live-streaming and chat platforms are particularly exposed to CSAM and child exploitation. These platforms can put children in contact with predators who may pressure them into performing inappropriate acts or sharing private and sensitive information. According to ActiveFence’s recent research, communities of hidden predators are coordinating to weaponize live-streaming and live-chat platforms.
Read our report to understand how communities of hidden predators coordinate activities to identify minors, pressure them to act inappropriately, and record interactions to be shared within networks of predators.
2. Safety by design
Safety by design should be every company’s approach to platform development, ensuring that user safety is a top priority from the start. Companies should keep children, their capabilities, and their maturity levels in mind with each design decision.
Reporting mechanisms and AI detection are built-in tools that can combat the challenges of online safety for children, but safety features that interact directly with users can be designed into apps as well. For example, some companies use machine learning to build tools that alert children and parents when harmful messages are sent or received. In-app information can also be provided when children come across unsafe content or encounters: users searching for harmful terms, such as those related to eating disorders, can be directed to expert resources like local hotlines or body-positivity resources. Family-link features can likewise help parents manage children’s accounts, limiting screen time and exposure to specified content.
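To make the search-intervention pattern concrete, here is a minimal sketch of how a platform might route risky queries to expert resources. The term lists, categories, and resource strings are hypothetical placeholders for illustration, not ActiveFence's or any platform's actual implementation:

```python
from typing import Optional

# Hypothetical watchlist mapping risk categories to trigger terms.
# A real deployment would use vetted, regularly updated term lists.
WATCHLIST = {
    "eating_disorders": {"thinspo", "proana", "pro-ana"},
    "self_harm": {"selfharm", "self-harm"},
}

# Hypothetical support messages surfaced when a category is triggered.
RESOURCES = {
    "eating_disorders": "Help is available: example.org/eating-disorder-helpline",
    "self_harm": "You are not alone. Talk to someone: example.org/crisis-line",
}

def intercept_query(query: str) -> Optional[str]:
    """Return a support message if the query matches a watched category,
    or None so the app can show normal results."""
    tokens = set(query.lower().split())
    for category, terms in WATCHLIST.items():
        if tokens & terms:
            return RESOURCES[category]
    return None
```

In practice, exact token matching like this is only a starting point; production systems typically combine curated term lists with ML classifiers to catch misspellings and evasions.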
3. Child safety policies
A comprehensive policy defining unacceptable behavior on a platform must be created. This not only makes clear what is and is not tolerated, but also signals to abusers that your platform takes safety seriously. Each sector should craft policies that best reflect its user base, services, and needs; the policies of a social media platform and an instant messaging application, for example, will likely look quite different.
Building effective policy can be tricky. Policies should be explicit and direct, yet non-exhaustive, so that systems can respond to new and evolving threats. For an in-depth review of the policies of over 20 user-generated content platforms, see our Child Safety Policy Guide.
4. Proactive detection on and off-platform
Keeping children safe from CSAM, hate speech, harmful influences, and more requires platforms to take a proactive approach to content detection both on and off their platforms. While monitoring and moderating on-platform activity is crucial, off-platform intelligence is needed as well. To evade detection, predators now organize off-platform, in private messaging groups and in forums on both the open and dark web. Combating these threats requires off-platform intelligence sourcing to identify threats, harmful users, nefarious activity, and new TTPs (tactics, techniques, and procedures).
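One common building block for proactive detection is matching uploads against shared hash lists of known-harmful material. Industry databases of known CSAM typically use perceptual hashes (such as PhotoDNA) that survive resizing and re-encoding; the cryptographic hash below is a simplified stand-in, and the hash list itself is a made-up placeholder:

```python
import hashlib

# Hypothetical hash list; a real deployment would load vetted industry
# hashes (often perceptual, not cryptographic) from a trusted source.
KNOWN_BAD_HASHES = {
    hashlib.sha256(b"known-harmful-sample").hexdigest(),
}

def is_known_harmful(file_bytes: bytes) -> bool:
    """Flag an upload whose digest appears on the known-harmful list."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_HASHES
```

Hash matching only catches previously identified content; that is why it must be paired with the off-platform intelligence described above, which surfaces new material and new TTPs before they reach the hash lists.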
5. Reporting findings
Illegal, harmful content must be handled immediately. From the presence of CSAM to information on missing children, it’s essential that platforms not only remove harmful content but also report their findings. We recommend that companies form relationships with law enforcement agencies and NGOs that deal with issues of child safety.
The following organizations focus their efforts entirely on the reporting of child-related activities online:
The internet has become an integral part of our lives and our children’s lives. At the same time, its role as a haven for criminal activity has expanded to reach children. With threats rising both online and offline, platforms must make child safety a top priority. Through thoughtful product design, effective policy, and the right detection technology, platforms today can overcome the challenge of keeping children safe.