
The Trevor Project Protects the LGBTQ+ Community with Proactive Moderation Efforts

The Trevor Project sought proactive protection from harmful content. ActiveFence helped automate moderation, reducing reliance on user flags.

At a Glance

The Trevor Project is the leading suicide prevention and crisis intervention organization for LGBTQ+ young people. With the rise in online harassment against the LGBTQ+ community, The Trevor Project serves as a critical safe resource that young LGBTQ+ people can reach out to and feel protected by. In line with its goal of preventing LGBTQ+ youth suicide and self-harm, it was vital for The Trevor Project to implement Trust & Safety mechanisms that proactively moderate toxic or harmful messages and increase the protection of its community. By moving to a proactive approach, the organization was able to act quickly on harmful content and spend more time building a thriving community focused on peer-to-peer engagement.

Company Info

PROFILE
The Trevor Project is a nonprofit organization providing crisis intervention and suicide prevention services to LGBTQ+ youth.
INDUSTRY
Social Media

The Challenge

As a nonprofit, The Trevor Project must use its limited resources effectively to make a big impact. Given the volume of content posted daily to TrevorSpace, the organization's peer-to-peer platform, it's impossible to manually monitor every interaction on the platform.

Prior to ActiveFence, the team relied on manual moderation, historical systems, and user flags to catch content that violated their policies.

The team was looking for ways to address violative content faster. In cases of suicide and self-harm, acting quickly is crucial to help a user access life-saving care in time.

So, when launching TrevorSpace, they prioritized safety by design. Yet as the site grew in popularity after launch, they needed a vendor to help reduce their reliance on user flags and automate content moderation for specific abuse areas, so they could take action on harmful content more efficiently.


The Solution

In line with their mission to prevent LGBTQ+ youth suicide and self-harm, The Trevor Project needed a content moderation vendor that could cover the violations most critical to them: harassment & bullying, hate speech, child solicitation, and suicide and self-harm. Violation coverage was important, but so was the quality of the models. To reduce undetected content, they turned to ActiveScore, ActiveFence's contextual AI automated detection capability, to solve this challenge.

To strike a balance between providing a safe space for the community and allowing the freedom the community needs to grow, The Trevor Project needed a partner to implement their warnings-and-penalties guidelines quickly and effectively on TrevorSpace. When a user on TrevorSpace violates a guideline and the moderation team becomes aware of it, the user is issued warning points. TrevorSpace leverages ActiveOS codeless workflows to apply these policies automatically: anyone with 0-5 points automatically receives a warning, anyone with 6-7 points receives a two-week suspension, and anyone with over 8 points is permanently banned. They also use ActiveOS's moderation queue management to manually moderate community messages with greater efficiency.
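The warning-points policy described above amounts to a simple threshold mapping from accumulated points to an enforcement action. A minimal sketch is below; the function name and action labels are illustrative, not ActiveOS APIs, and the source says "over 8 points" for a permanent ban, so exactly 8 points is treated here as a ban by assumption:

```python
def enforcement_action(points: int) -> str:
    """Map a user's accumulated warning points to a moderation action.

    Thresholds follow the policy in the case study:
    0-5 points -> warning, 6-7 points -> two-week suspension,
    8+ points -> permanent ban (the source says "over 8"; 8 itself
    is assumed to ban here).
    """
    if points <= 5:
        return "warning"
    if points <= 7:
        return "two_week_suspension"
    return "permanent_ban"
```

In ActiveOS this mapping is configured as a codeless workflow rather than written as code; the sketch only makes the thresholds explicit.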

The Impact

By using ActiveFence, The Trevor Project ensures greater protection against the most egregious harms facing its community on the TrevorSpace platform. This includes customizing ActiveScore hate speech models with keyword lists that exclude reclaimed terms commonly used within the LGBTQ+ youth community, aligning detection with their policy.

By incorporating a proactive approach to moderation, they have moderated thousands of forums on the platform and ensured that their users have a safe space to discuss the issues that matter most to them.


“As the leader of our peer-to-peer networks, it’s crucial to ensure that we are creating a safe online space for all LGBTQ+ young people. With ActiveFence, we’ve found a partner from day one to help safeguard our community, while strengthening our real-time moderation efforts.”

Tommy Marzella

VP, Social Platform Development & Safety

