When platforms build a product with Safety by Design as a guiding principle, Trust & Safety teams can better protect users from the start. In this article, we share seven features that teams should consider incorporating when designing their platforms.
Safety by design is the principle of building a product with safety at its center. The goal is to prevent harm before it occurs, rather than to implement remedies after the fact. In the Trust & Safety industry, safety by design should guide a platform from the start of its creation. By putting safety at the forefront of product decisions, Trust & Safety teams reap long-term benefits: users stay safe, and teams' jobs ultimately become easier.
In this blog, we'll review the main principles of safety by design and share seven features that can easily be incorporated into a platform's design to make it safer.
In practice, safety by design means that product development should take a human-centric approach. According to the eSafety Commissioner, Australia's regulatory agency for online safety, safety by design must be embedded into the culture and ethos of a business. Stressing practical and actionable methods, the eSafety Commissioner holds that safety by design is achievable for platforms of all sizes and stages of maturity.
Here are the three fundamental principles that make up safety by design:
The burden of safety is on the service provider, and not on the user.
User dignity comes first. In practice, this means that a product should serve the user, putting their interests before all else.
The way to achieve safety is with transparency and accountability.
With this understanding of safety by design, we’ll dive into seven features your platform can implement to ensure the safety of your users.
With the following product features, teams can build a safe platform from the start.
Age verification mechanisms can ensure that only those who are old enough can gain access to your platform or service. An example of an age verification process is a form where a user enters their name and date of birth and uploads an identifying document. Generally, this feature is implemented during the sale or sign-up process of a platform.
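To make this concrete, here is a minimal sketch of the age-gating logic such a form might sit on top of. The `MINIMUM_AGE` threshold and function names are hypothetical, and a real deployment would pair this check with document verification, since self-declared dates are easy to falsify.

```python
from datetime import date

MINIMUM_AGE = 13  # hypothetical threshold; the right value varies by jurisdiction and service

def age_from_dob(dob: date, today: date | None = None) -> int:
    """Compute a user's age in whole years from their date of birth."""
    today = today or date.today()
    # Subtract a year if this year's birthday hasn't happened yet.
    had_birthday = (today.month, today.day) >= (dob.month, dob.day)
    return today.year - dob.year - (0 if had_birthday else 1)

def may_sign_up(dob: date) -> bool:
    """Gate the sign-up flow on the declared date of birth."""
    return age_from_dob(dob) >= MINIMUM_AGE
```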
The ability to identify child users allows a platform to implement protections, such as limiting access to specific features and showing only age-appropriate content. An additional safeguard for young users is granting parents control over a child's use of the service. These features also help companies meet legislative requirements, such as the United Kingdom's Online Safety Act.
A mechanism through which users can report abuse is crucial to every platform, and its accessibility and effectiveness should be assessed regularly.
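As an illustration, a report intake record might look something like the sketch below; the field names and `ReportCategory` values are assumptions made for illustration, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum

class ReportCategory(Enum):
    HARASSMENT = "harassment"
    CSAM = "csam"
    ILLEGAL_GOODS = "illegal_goods"
    TERRORIST_CONTENT = "terrorist_content"
    OTHER = "other"

@dataclass
class AbuseReport:
    """A single user-submitted abuse report, queued for triage."""
    reporter_id: str
    reported_content_id: str
    category: ReportCategory
    details: str = ""
    created_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```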
On platforms with user-generated content, content moderation tools can be implemented to stop abuses. With harmful content detection, threats such as CSAM, illegal goods, or terrorist content can be removed automatically or flagged for human review.
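Below is a simplified sketch of how such a routing decision might work. The thresholds, category names, and outcomes are illustrative assumptions; a production detection pipeline would weigh many more signals.

```python
AUTO_REMOVE_THRESHOLD = 0.95   # hypothetical confidence cut-offs
HUMAN_REVIEW_THRESHOLD = 0.60

def route_detection(category: str, confidence: float) -> str:
    """Decide what to do with a piece of flagged user-generated content.

    High-confidence detections of severe harm are removed automatically;
    lower-confidence hits are queued for human review.
    """
    severe = {"csam", "terrorist_content", "illegal_goods"}
    if category in severe and confidence >= AUTO_REMOVE_THRESHOLD:
        return "remove_automatically"
    if confidence >= HUMAN_REVIEW_THRESHOLD:
        return "queue_for_human_review"
    return "no_action"
```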
Basic tools can allow a user to restrict interactions with another user. Blocking, muting, and limited or restricted viewing let users decide whether, and how, they interact with other users.
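As a sketch, applying a viewer's block and mute lists to their feed might look like the following; the feed-item shape and set-based lookups are assumptions made for illustration.

```python
def filter_feed(viewer_blocked: set[str], viewer_muted: set[str],
                feed: list[dict]) -> list[dict]:
    """Drop items from blocked authors and mark items from muted ones.

    Each feed item is assumed to be a dict with an "author_id" key.
    Blocked authors are removed entirely; muted authors are kept but
    can be collapsed or hidden by default in the UI layer.
    """
    visible = []
    for item in feed:
        author = item["author_id"]
        if author in viewer_blocked:
            continue  # blocked: never shown
        visible.append({**item, "muted": author in viewer_muted})
    return visible
```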
With the right features, exposure to harmful content created by problematic users can be swiftly dealt with. Platforms should be able to hide specific pieces of content, or all content generated by a malicious user. Internally flagging or labeling content lets teams temporarily limit its exposure or permanently delete it. For minor cases or grey areas, visibility or discoverability can be reduced instead.
Going a step further, platforms should have mechanisms that prevent new harmful content from being shared in the first place; for example, blocking repeat abusers from logging in.
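One way to model this escalation path is with a small set of visibility states plus an account-level suspension flag, as in the hypothetical sketch below; the state names are illustrative, not a standard taxonomy.

```python
from enum import Enum

class Visibility(Enum):
    PUBLIC = "public"      # fully visible and discoverable
    REDUCED = "reduced"    # grey areas: excluded from recommendations and search
    HIDDEN = "hidden"      # flagged: temporarily not shown to anyone
    DELETED = "deleted"    # confirmed violation: permanently removed

def can_log_in(account: dict) -> bool:
    """Block ongoing abusers at the door so no new content can be shared."""
    return not account.get("suspended", False)
```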
Clear, comprehensive policies must be in place for Trust & Safety teams to take action. Whether called community guidelines, terms of use, or policies, these documents define what counts as abuse and give teams the authority to act against it. For guidance on building platform policies, read our Trust & Safety Policy review.
Consensual software lets users explicitly say "yes" before interacting with a platform, a principle that applies to its UX, software engineering, and data storage. Throughout the platform, enough information should be provided for users to make an educated decision about whether to opt in to features, activities, or data sharing. Default settings should be built with a "bias" toward privacy, and the platform should ask permission before doing anything potentially harmful.
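A minimal sketch of privacy-biased defaults, assuming a hypothetical settings object: every data-sharing and exposure flag starts off, and enabling one requires an explicit "yes" from the user.

```python
from dataclasses import dataclass

@dataclass
class UserSettings:
    """Defaults are biased toward privacy; users must opt in explicitly."""
    share_usage_data: bool = False
    profile_discoverable: bool = False
    allow_messages_from_strangers: bool = False
    personalized_ads: bool = False

def opt_in(settings: UserSettings, feature: str, user_confirmed: bool) -> None:
    """Flip a setting on only after the user has explicitly consented."""
    if not user_confirmed:
        raise PermissionError("Explicit consent is required to enable this feature.")
    setattr(settings, feature, True)
```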
As we've seen, technology companies have a responsibility to protect users and to build features into their platforms that enforce that protection. With these simple features, platforms can create safer online spaces by giving users more control, implementing preventative measures, and ensuring that proper responses to abuse are in place.