Digital first responders are our guardians of the internet, exposing themselves to harmful content to keep us safe. In return, we must ensure their safety. Viewing this content daily, moderators are at risk of mental health symptoms that significantly affect their wellbeing. To address this, we provide actionable solutions that trust and safety teams can implement to build resilient, healthy work environments.
As the field of mental health advances, companies are beginning to recognize the effects work has on the health of their employees, and creating healthy work environments is becoming a priority. Within this shift, the mental wellbeing of certain segments of our workforce deserves particular attention: those who protect our society must receive the support they need to do their jobs effectively while remaining healthy. Digital first responders are exposed daily to horrific content in order to protect the citizens of the internet. Their noble work makes the internet a safer place. In return, we must ensure that these responders are safe as well.
A New York Times article titled “We forgot about the most important job on the internet” reminds us that content moderators are “essential gatekeepers, but also our greeters, paramedics, law enforcers, teachers and curators.” Though theirs is “one of the most crucial jobs created by the internet economy,” we have failed to prioritize their health.
In this blog, we highlight the dangers content moderators face and provide practical solutions to these challenges. We review the foundations that trust and safety management needs to build resilience programs and protect the wellbeing of digital first responders.
While it’s clear that exposure to harmful content may harm those exposed, the extent of that harm is greater than one might expect. Research highlights that prolonged exposure to such content without workplace support can significantly impair health and cause long-lasting mental health symptoms. A 2014 study found that viewing media coverage of the Boston Marathon bombing for six or more hours a day was associated with more acute stress than actually being present at the attack.
Tough performance appraisals based on accuracy and judgment calls exacerbate these psychological effects. PTSD, anxiety, and depression are just a few of the consequences. Vicarious trauma, a form of PTSD, causes those who work with victims of trauma to experience significant psychological distress. This phenomenon has long been observed in psychologists, and it is now increasingly seen in content moderators.
Companies are affected as well. When employees are unwell, their work suffers and productivity decreases. Furthermore, platforms may even face liability if they do not take measures to protect their employees.
Workplace wellness programs must be implemented to address both the needs of those exposed to harmful content and the stressors of performance pressure.
Mitigating risk is easy to prescribe but challenging to implement in practice. Here, we suggest tools and interventions that can both proactively shield responders from harmful content and reactively provide support.
Trust and safety teams should turn to what they know best: technology. Shared databases of known harmful content allow artificial intelligence tools to immediately match and recognize that content, eliminating the need for a moderator to review it. Reusing these “recycled” moderation decisions removes the need for yet another human review.
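To make this concrete, here is a minimal sketch of hash-based matching against a shared database of known harmful content, using the open-source Pillow and imagehash libraries. The stored hash values, threshold, and function name are illustrative assumptions, not a description of any particular vendor’s system.

```python
from PIL import Image
import imagehash

# Perceptual hashes of previously confirmed harmful images.
# (Hypothetical placeholder values; a real deployment would load these
# from a shared industry database.)
KNOWN_HARMFUL_HASHES = [
    imagehash.hex_to_hash("d879f8f89b1bbf01"),
    imagehash.hex_to_hash("ffd8b1a4c4d0e2f0"),
]

# Maximum Hamming distance between hashes to still count as a match.
MATCH_THRESHOLD = 5

def is_known_harmful(image_path: str) -> bool:
    """Return True if the image matches known harmful content,
    so it can be actioned without a moderator ever seeing it."""
    candidate = imagehash.phash(Image.open(image_path))
    return any(candidate - known <= MATCH_THRESHOLD
               for known in KNOWN_HARMFUL_HASHES)
```

Because perceptual hashes tolerate small edits such as re-encoding or resizing, a single confirmed decision can be “recycled” across many near-duplicate uploads.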
Additionally, altering the images moderators are exposed to can largely mitigate the risk posed by graphic material. Many studies have shown that different presentations of an image elicit different emotional responses. In a study of 40 university students, applying size reduction and blur to photos significantly decreased negative emotional responses; participants found the altered photos less troubling and more neutral than the original full-sized, clear images. Image color also influences responses: participants in another study reacted more negatively, and with greater arousal, to colored images, while greyscale versions of the same images elicited milder responses. Reports also found that despite alterations to color, clarity, and size, neither moderation accuracy nor speed was compromised.
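These alterations are straightforward to apply in an image pipeline. Below is a minimal sketch using the Pillow library; the function name and default parameter values are illustrative assumptions, not settings drawn from the studies cited above.

```python
from PIL import Image, ImageFilter

def soften_for_review(image_path: str,
                      scale: float = 0.5,
                      blur_radius: int = 8,
                      grayscale: bool = True) -> Image.Image:
    """Apply size reduction, blur, and greyscale conversion before
    an image is shown to a moderator."""
    img = Image.open(image_path)
    # Size reduction: a smaller image lessens its visual impact.
    img = img.resize((int(img.width * scale), int(img.height * scale)))
    # Blur: obscures graphic detail while keeping the scene recognizable.
    img = img.filter(ImageFilter.GaussianBlur(blur_radius))
    # Greyscale: removing color elicited milder responses in the studies above.
    if grayscale:
        img = img.convert("L")
    return img
```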
Resilience is a term often used in the context of positive psychology. Generally, it refers to the ability to “bounce back” from the inevitable challenges of life. When it comes to resilience in the workplace, however, especially in an environment with challenging work, it takes on a different meaning, as each workplace presents a different range of stressors.
Trust and safety teams should provide ongoing resilience training to help digital first responders manage the continuous stressors of their work and build foundational coping skills.
Skill-building exercises should be incorporated into onboarding training as well as periodic sessions. Not only do these trainings develop personal resilience, they also create a supportive work environment that enables content moderators to succeed. Ongoing resilience training has been shown to increase employee wellbeing and improve performance and productivity. This is especially important in high-stress environments where performance is constantly evaluated.
“Distancing from drama” is a crucial element of employee wellbeing, tying these intervention practices together. Downtime has been proven to help moderators cope. Organizations have implemented downtime for employees, with some breaks incorporating activities that team members can request, such as painting, museum visits, and walks in the park. These breaks are key to creating space away from the content moderators view.
Another important tactic for “distancing from drama” is mindfulness. Mindfulness centers attention and brings awareness to the present moment. Tools such as guided breathing exercises throughout the day can help moderators feel safe and direct their minds to the present. Mindfulness can decrease employee stress, build resilience, and improve vigor, enhancing overall wellbeing.
As mentioned above, workplaces that emphasize resilience and mental health inherently create a supportive environment. When care is incorporated into organizational structures, intervention programs are more effective. Team leaders, managers, and team members should foster supportive environments, offering social outlets to grapple with challenging work.
Sometimes, additional support is needed as a reactive intervention. Many organizations offer on-site counseling where employees can work through both work-related and personal stressors. Therapy in the workplace has been shown to reduce employees’ anxiety, stress, and depression. In severe cases, when symptoms meet the criteria for a psychological disorder, recovery-focused psychotherapy may be necessary. This reactive care supplements workplace interventions.
As we have learned, digital first responders face real challenges in the workplace that can significantly impair their mental health. Symptoms can be severe, including the onset of illnesses such as depression, anxiety, and PTSD. Trust and safety teams must implement interventions not only to protect their employees but also to shield their companies from liability.
Interventions that reduce exposure (such as AI that filters harmful content and tools that alter images), build resilience, bring employees into the present, and provide clinical support all foster a supportive work environment.
ActiveFence works with trust and safety teams, digital first responders, and recruitment teams to ensure that work environments are safe. CleanView, an ActiveFence tool, prioritizes wellness with image alteration tools and builds quiet and mindfulness into employees’ workdays with guided breathing exercises.