By 2021 the online population had reached 4.66 billion, over 60% of the world's inhabitants. The growth of the internet has been matched by an escalation of dangerous activity online. At the same time as platforms develop new services, legislators are tightening requirements to tackle online abuse, leaving Trust and Safety professionals caught in a perfect storm. To thrive in 2022 and beyond, platforms must proactively identify and combat the emerging threats that target their users.
In recent years, the real-world impact of online activity has become ever more pronounced. We have grown accustomed to self-radicalized ‘lone wolves’ committing deadly acts of terror. These violent events form part of a dangerous accelerationist feedback loop: motivated by extremist content, attackers record their violence and share it online to inspire future incidents.
It is not just ethnic or religious violence that can be traced to online activity. Online child predator communities are growing.
Trust in the mainstream news media has also been damaged, with coordinated dishonest sources proliferating online.
Users across the globe are being challenged by disinformation, unbounded by language or geography. This activity is most pronounced around general elections and has been severe throughout the COVID-19 pandemic. These false narratives fan the flames of societal division and dramatically destabilize democracies across the world.
These serious threats are converging at a moment of significant technological innovation.
Not only can private individuals now broadcast on social media platforms, but, using the architecture built for online gaming, they can simulcast across platforms to huge audiences. These innovations and the growing interconnection of platforms make it easier to reach larger audiences with reduced friction. However, they also multiply the opportunities for abuse, with repercussions for child endangerment, racial and religious extremism, and the spread of disinformation. The movement toward the metaverse expands the potential reach of harmful content and broadens the burden of liability for harm.
This rapid interconnection of platforms raises key questions for online safety, chief among them liability:
If a criminal act is organized on a gaming platform and the gameplay is then simultaneously broadcast across multiple, independent streaming platforms, whose responsibility is it?
A cross-platform approach to threat detection is the only viable solution to ensure platform integrity.
These questions are all the more pressing because the internet rules established twenty-five years ago are rapidly being replaced. Section 230’s status quo is receding into history.
National legislators are taking steps to set new international internet standards, and responsibility for hosted content is shifting from the content creator to the platform.
These new laws will have consequences for online anonymity, freedom of speech, and the right to be protected from harm.
The UK is leading the charge, creating the first statutory duty of care for online safety: it is expected to pass a law requiring platforms to find and remove child sexual abuse material and terrorist content, as well as other types of harmful content such as hate speech. Canada is following suit, and the EU is considering similar requirements.
User-generated content has few online borders, and while these regulatory innovations are occurring abroad, US companies will need to comply if they wish to access foreign markets. Proactive harmful content detection therefore looks set to become the international expectation. This means detecting harm off-platform to protect users on it.
As the explosion of user growth and user abuse continues and legal obligations intensify, platforms must become more agile in handling emerging threats.
2022 is heralded as the start of the Age of Accountability: a year of legal revolution that will cement a proactive international baseline for online safety. Trust and Safety teams must adapt quickly as the online ecosystem changes and overlapping platform use creates multi-platform vulnerabilities.
ActiveFence works with leading platforms to help them stay ahead of threats and comply with their legal obligations.