For Trust & Safety teams at website building platforms, maintaining online safety means protecting freedom of speech while navigating a complex landscape of malicious activities and shifting global regulations.
As providers of one of the internet’s core infrastructure services, website building and hosting platforms place a premium on freedom of expression and access to information. For the Trust & Safety teams of these platforms, this often means a conservative approach to moderation: taking action only when undoubtedly necessary. This preferred course of action, however, is being challenged by a more complex online landscape and by changes to the legal frameworks in which the internet operates.
With roughly 1.9 billion websites currently online and about 252,000 new ones created daily around the world, Trust & Safety teams at website building platforms must not only process huge volumes of data but also understand and moderate violations across languages and geographies.
To handle this high volume of items, Trust & Safety teams rely on NLP (natural language processing), a branch of AI that helps computers make sense of human language. While NLP enables detection at scale, most models are built with English-language training sets, creating significant gaps in non-English detection capabilities. English is currently the language of 60.4% of the world’s top 10 million websites, but Trust & Safety teams are ill-equipped to make nuanced moderation decisions for the remaining roughly 40% of websites, where they have few or no detection tools available.
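The English-bias gap can be illustrated with a toy sketch (hypothetical names and terms, not any platform’s actual system): a keyword-based classifier whose lexicon covers only English flags an English listing but misses its Spanish equivalent.

```python
# Toy illustration of English-biased detection (hypothetical, not a real
# moderation system): a classifier with an English-only abuse lexicon.

ENGLISH_ABUSE_TERMS = {"scam", "phishing", "counterfeit"}  # toy English lexicon

def flags_as_abusive(text: str) -> bool:
    """Return True if any known English abuse term appears in the text."""
    tokens = text.lower().split()
    return any(term in tokens for term in ENGLISH_ABUSE_TERMS)

# The English page is caught; the equivalent Spanish page slips through.
print(flags_as_abusive("cheap counterfeit watches for sale"))    # True
print(flags_as_abusive("relojes falsificados baratos en venta")) # False
```

Real NLP classifiers are statistical rather than keyword-based, but the failure mode is the same: a model that has only seen English examples of a violation has no signal for the same violation expressed in another language.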
A multi-pronged approach that involves automated detection technologies and human experts can provide more holistic coverage of global risks.
Traditionally protected by US Section 230, the legal landscape for Trust & Safety teams is changing. Governments around the world are putting in place laws that carry heavy fines and executive liability for the spread of harmful content online. This accountability will now apply to where content is accessed – regardless of where it was created or where the hosting company’s offices are located.
Among these laws, Germany’s 2017 Network Enforcement Act (NetzDG) mandates that internet companies remove “manifestly unlawful” content within 24 hours of a user reporting it. Similarly, the EU’s Terrorist Content Online Regulation requires platforms to remove terrorist content within one hour of an official order, as does Austria’s Communication Platforms Act, which is already in place. France’s law reinforcing respect for the principles of the Republic likewise requires website hosting platforms to take swift action against reported items while drafting legal reports on each removal, making a reactive approach to harmful content a significant financial burden and incentivizing proactive removal. Finally, the UK’s Online Safety Bill, recently introduced to Parliament, will require user-to-user platforms to proactively remove illegal content, as well as some content that is legal but harmful. Because it applies to all platforms that host user-generated content or allow users to interact online, the Online Safety Bill will likely cover website building and hosting platforms, requiring them to take action.
Carrying heavy fines and criminal liability, these laws mean a dramatic shift for Trust & Safety teams. Platforms that have traditionally taken a more conservative and reactive approach will soon be required to proactively assess the level of harm posed by a website.
As an infrastructure service, website building platforms generally take a neutral approach to content moderation – supporting freedom of expression whenever possible.
It’s easy to see why infrastructure companies take this approach to harmful content: these platforms determine whether or not an individual or group gets to exist online. Matthew Prince, the CEO of Cloudflare, another internet infrastructure company, said “no one should have that power.” In the same interview, Prince explained that action by infrastructure companies can be used as a precedent for massive moderation and censorship – and should therefore be backed by strong policies.
This approach, however, can be challenged. In the case of Cloudflare, the company made the complex decision to ban 8chan from its services following the site’s involvement in multiple mass shootings. As the online world changes and the legal environment demands proactivity, the Trust & Safety teams at website building and hosting platforms must also change their outlook, adopting a more proactive approach to maintaining online safety.
The demands of today’s online world require the Trust & Safety teams of website building platforms to shift their mindset. To remain compliant without losing sight of their core values, teams must be confident that they are making the right decision while also acting proactively.
Clearly, the online safety landscape is changing, testing Trust & Safety teams with an influx of multi-language malicious activity that carries potentially devastating real-world consequences, complex ethical considerations, and legal ramifications. Website building and hosting platforms must institute proactive measures to maintain the safety of the web and the integrity of their services.
ActiveFence has investigated the harms impacting website building platforms, providing them with actionable intelligence on malicious activity. Access the report below to learn how to stay ahead of looming threats.