The UK’s much-awaited Online Safety Bill has received Royal Assent and is now law. Intended to “make the UK the safest place to go online,” the bill has been through a long, iterative process since proposals were first introduced in April 2019. Among the most controversial online safety regulations passed in recent years, the bill seeks to ensure online safety for users in the UK, but its application may pose significant problems for tech platforms.
The legal team at ActiveFence has reviewed the bill and we share what we believe are its most crucial components below.
Ofcom, the United Kingdom’s communications regulator, will act as the policing body for the Online Safety Bill now that it has become law. As part of its duty to oversee compliance, Ofcom will draft initial guidance and codes of practice on numerous aspects covered by the bill. This guidance must be published within 18 months of the bill’s passage, and Ofcom expects to publish its codes of practice soon after the bill’s commencement, possibly within the first two months.
Broadly speaking, the Online Safety Bill will apply to three categories of services offered to users in the UK.
The Online Safety Bill represents a seismic shift in the UK’s approach to regulating online platforms, imposing specific duties on platforms that host user-generated content (UGC). Providers that operate multiple services may also need to ascertain which parts of their businesses fall within the law’s scope. For instance, a platform offering one-to-one messaging and voice calling may have requirements placed on messaging, while voice calls remain outside the law’s scope.
Last year, another piece of platform regulation was also implemented in the EU in the form of the Digital Services Act (DSA). Whilst there are some parallels to be drawn, the DSA focuses predominantly on transparency of moderation, risk assessments, and compliance processes in relation to illegal content. By contrast, the Online Safety Bill focuses on the measures that platforms have in place to tackle not only illegal content but also content that is harmful to children.
The law states that platforms will have a “duty of care” to keep their users safe, but what this means in terms of specific obligations will depend on the size and capacity of the platform in question and the likelihood of harmful content being shared on it. In fulfilling this “duty of care,” Ofcom will likely expect platforms to take steps including proactive monitoring for online harm (especially for high-risk platforms), tools that allow users to control the type of content they access, and effective notice and takedown systems. Additionally, platforms will need to consider how their own algorithms and design may exacerbate harm.
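As a purely illustrative sketch of what a notice-and-takedown flow can look like in practice, the Python below routes a user report either to removal or to human review and keeps an audit record of the decision. All names, thresholds, and routing choices here are assumptions made for the example; they are not obligations taken from the bill or features of any specific product.

```python
# Hypothetical notice-and-takedown sketch; names, threshold, and routing logic
# are illustrative assumptions, not requirements from the bill.
from dataclasses import dataclass
from datetime import datetime, timezone

audit_log: list[dict] = []  # retained records can support later transparency reporting

@dataclass
class UserReport:
    content_id: str
    reporter_id: str
    reason: str  # e.g. "illegal" or "harmful_to_children"

def handle_report(report: UserReport, risk_score: float) -> str:
    """Route a user report: likely-violating content is removed immediately,
    everything else is queued for human review; an audit record is kept either way."""
    action = "removed" if risk_score >= 0.9 else "queued_for_review"  # assumed threshold
    audit_log.append({
        "content_id": report.content_id,
        "reason": report.reason,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return action

# Example: a report flagged by proactive detection with a high risk score
print(handle_report(UserReport("post-123", "user-456", "illegal"), risk_score=0.95))
```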
In addition to its firm protections for children, the bill empowers adults to take control of what they see online, providing three layers of protection for internet users.
Initially, online services will need to conduct between one and three detailed risk assessments, described below.
All in-scope services will need to assess the risks of harm to users that could arise as a result of illegal content on the platform, including how quickly and widely illegal content could be disseminated by algorithms. This risk assessment must take into account a number of factors, including the platform’s user base, the functionalities of the service, the different ways the service is used, and the risk of the service being used for the commission or facilitation of a serious criminal offense. The risk assessment must also consider how the design of the service helps to mitigate or reduce any identified risks and promotes media literacy.
All in-scope services will need to carry out a specific risk assessment if their service (or part of it) is likely to be accessed by children. To determine whether this is the case, platforms must first undertake a children’s access assessment, whose purpose is to ascertain whether the service is likely to be accessed by, or appeal to, a significant number of users who are children. Platforms will only be able to conclude that the service is not likely to be accessed by children if they can demonstrate that they are successfully using age verification or age estimation technologies to prevent such access.
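That access-assessment logic can be paraphrased as a short, hypothetical decision rule; the function and parameter names below are assumptions made for illustration.

```python
# Hypothetical children's access-assessment rule, paraphrasing the paragraph above:
# a service may treat itself as not likely to be accessed by children only if it can
# demonstrate effective age verification or estimation; otherwise, appeal to children
# or a significant child user base brings it into scope.
def likely_accessed_by_children(appeals_to_children: bool,
                                significant_child_user_base: bool,
                                effective_age_assurance: bool) -> bool:
    if effective_age_assurance:
        return False
    return appeals_to_children or significant_child_user_base
```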
Like the illegal content risk assessment, the children’s risk assessment must take into account a number of factors, including the number of children who use the service (and their different age groups), the level of risk that children face of encountering certain types of harmful (not just illegal) content on the platform, and the risks these categories of content could pose to children of different age groups and characteristics. The risk assessment must also take account of the way the service is used and designed, including how it could facilitate the dissemination of content that is harmful to children. Platforms must also consider how the design of the service helps to mitigate or reduce any identified risks and promotes media literacy.
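One way a Trust & Safety team might capture the factors above is as a structured assessment record. The sketch below shows a hypothetical shape for such a record; none of the field names come from the bill itself.

```python
# Hypothetical record for a children's risk assessment; field names are
# illustrative assumptions rather than statutory terms.
from dataclasses import dataclass, field

@dataclass
class ChildrensRiskAssessment:
    service_name: str
    child_user_estimate: int                                        # estimated number of child users
    age_groups_present: list[str] = field(default_factory=list)     # e.g. ["13-15", "16-17"]
    content_risks: dict[str, str] = field(default_factory=dict)     # content category -> risk level
    design_mitigations: list[str] = field(default_factory=list)     # design measures that reduce risk
    media_literacy_measures: list[str] = field(default_factory=list)

assessment = ChildrensRiskAssessment(
    service_name="example-video-app",
    child_user_estimate=120_000,
    age_groups_present=["13-15", "16-17"],
    content_risks={"violent challenges": "high", "bullying": "medium"},
    design_mitigations=["recommendation limits for child accounts"],
    media_literacy_measures=["in-app safety prompts"],
)
```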
Once risk assessments and policies are in place, platforms will be legally required to uphold those policies and report on these activities. Platforms will also be required to carry out further risk assessments before making any significant changes to the design or operation of the service.
In response to these risks, platforms must create policies and implement measures to counter them. For instance, platforms will need to provide the means for users to easily report illegal content or, where applicable, content that is harmful to children. They will also need to provide an easy-to-use and transparent complaints procedure and keep accurate records of the risk assessments they have undertaken in relation to illegal content and the risks to children. Platforms also have a duty to enforce their terms of service and to apply them consistently when taking down UGC, restricting user access to content, and suspending or banning non-compliant users. If a platform poses potential risks to children, it will also need to define the specific actions it will take to mitigate those risks.
In addition to the core duties of care, there are several other requirements that platforms may need to abide by. These include the duty to report child sexual exploitation content to the National Crime Agency and, for larger platforms, duties to tackle fraudulent advertising and produce transparency reports. The bill also introduces a number of balancing measures, which oblige all regulated services to have “particular regard” for freedom of expression when implementing safety measures. Larger platforms also have specific duties to assess the impact of their measures on freedom of expression and privacy rights, to protect news and journalistic content that appears on the platform, and not to act against users other than in accordance with their terms of service.
The law will set up two main categories of content that platforms will be required to act on:
The law will require platforms to take proactive measures to protect users from encountering 13 different types of illegal content, all of which are already offenses under existing legislation.
In addition, platforms will need to take action against illegal content beyond the listed offenses after notification of the content’s existence.
Platforms accessible to children will be required to identify risks to children that are legal but harmful, and to implement proportionate measures to mitigate these risks and prevent children from accessing harmful content. The most damaging content for children, which platforms will need to take particular care to prevent, is set out in the bill. It includes “primary priority content” (such as pornography and content that encourages self-harm, suicide, or eating disorders); “priority content that is harmful to children” (such as abusive content that targets protected characteristics, bullying content, and content that encourages or depicts violence, encourages high-risk “challenges” or “stunts,” or encourages taking harmful substances); and other non-designated content that presents a “material risk of significant harm” to a significant number of children in the UK.
Ofcom intends to be a proactive regulator now that the bill has become law, and has already hired a significant task force to support these efforts. The regulator also expects the number of online services impacted by this law to reach 25,000 or more.
In preparation for this endeavor, Ofcom expects to produce over 40 regulatory documents – including codes of practice and guidance for service providers – which will set the specific expectations and rules for platforms to follow. Monitoring these services at scale will require Ofcom to establish automated data collection and analysis systems, as well as advanced IT capabilities – adding up to an expected cost of £169m by 2025, with £56m already incurred by the end of 2023.
Fines for failure to comply with the law will be the greater of £18 million or 10% of a company’s global annual turnover – this can add up to billions of pounds for a large online platform. Moreover, Ofcom will be able to seek court rulings to stop payment platforms and internet service providers from working with harmful sites. Additionally, the law will impose criminal liability on company executives, such as senior managers and corporate officers, who fail to cooperate with the law.
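To make the penalty cap concrete, here is a quick worked example of the stated formula; the turnover figure is invented purely for illustration.

```python
# Maximum fine under the law: the greater of £18 million or 10% of global annual turnover.
def max_fine_gbp(global_annual_turnover_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_annual_turnover_gbp)

# A platform with £25 billion in global annual turnover faces a cap of £2.5 billion,
# well above the £18 million floor.
print(max_fine_gbp(25_000_000_000))  # 2500000000.0
```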
Over the next 18 months (or sooner), Ofcom will issue codes of practice and guidelines for online platforms. At that point, platforms will need to begin implementing new online safety mechanisms – as defined by the law and described above. Platforms and executives will likely be held liable for non-compliance.
While the laid-out timeline seems prolonged, it is critical to note that the process of implementing online safety mechanisms is complex and expensive. Platforms that haven’t already enlisted the help of dedicated technology and tools may find that, by the time specific requirements are laid out, it is too late to catch up.
Technology platforms should take proactive actions to keep users safe in preparation for the bill’s enactment. However, the codes of practice and guidance issued by Ofcom will form key planks of the online safety regime and its practical application once they have been published.
Ranging from managed intelligence services to content detection APIs and a dedicated Trust & Safety platform, ActiveFence’s services allow platforms of all kinds to ensure the safety of their users and services. By providing proactive insights into online harms before they impact users, we enable platforms to remain legally compliant across geographies and languages. Moreover, by using ActiveOS, Trust & Safety leaders can quickly assess platform risks, establish policies, and ensure that content is quickly and efficiently handled by the right moderation team – limiting platform liability for harmful content. To learn more about how ActiveOS enables teams to remain compliant, click below.