Keep your players safe without ruining the fun
Gaming has changed a lot over the past 40 years, evolving from small in-person activities to massive online events that drive new technology but also open the door to both online and in-person abuse. The challenge for players, developers, and community managers has been to keep Trust and Safety intact without degrading the in-game experience.
By understanding the past, we can be better equipped to handle present and future threats. Let's turn back the clock and look at how safety in gaming has evolved over time.
Before the 1990s, video games were mostly played offline, as tabletop games, console games, or text-based computer games. Players typically gathered in the same room, with games involving just a few people. Tabletop games relied on players’ imaginations for immersion. This changed as home-based video game technology advanced.
The rise of dial-up internet in the mid-90s changed the scene. As more homes got online, game developers began to enhance games, increase immersion, and allow players to connect from anywhere, as long as they had a modem. Early online games like Multi-User Dungeons (MUDs) let groups of 2 to 20 players interact in real time, combining gameplay and chat. These MUDs laid the foundation for the larger online games, like MMORPGs and FPS titles, that would become popular and expand competitive gaming in the 2000s.
In these early days of online gaming, there were no standardized rules for Trust and Safety. Safety was managed by players and in-game moderators, guided by basic social norms. As cheating and disruptive behavior became more common, these players and moderators would email game developers directly, which often distracted the developers from making updates and enhancements to their games. By the end of the decade, more structured safety measures were introduced, including the hiring of dedicated community managers who took pressure off developers and oversaw rule enforcement directly.
The 2000s saw the widespread adoption of broadband internet, which gave players faster, always-on connectivity compared to dial-up. This helped make MMORPGs, FPS games, and online console games more popular and led to the rise of digital platforms like Steam, which changed how games were sold and played.
The faster connection speeds of broadband enabled smoother multiplayer gaming with less latency, making large-scale player interactions in MMORPGs and FPS games possible. The always-on connection allowed persistent game worlds to function and encouraged more social gaming where players could easily connect with friends or form global communities. With larger files able to be downloaded or streamed, developers could add more complex features like voice chat and downloadable content. Broadband revolutionized online gaming and set the stage for gaming to shift from a niche hobby to a mainstream global phenomenon, with tens of millions of people playing at any given time.
However, this rapid rise in online gaming also led to more rule-breaking, harassment, hate speech, and other unwanted behaviors, prompting game studios to create better safety systems. In-game reporting tools allowed players to flag inappropriate actions, which were reviewed by dedicated moderation teams to make games safer and more enjoyable for players.
These in-house systems were costly and complex, leading many large companies to outsource moderation responsibilities to Business Process Outsourcing (BPO) providers. While only large companies could afford this approach, it signaled the growing need for professional moderation services that could manage safety at scale.
During the 2010s, the gaming industry experienced an explosion in gaming types and experiences, driven by faster speeds, easier accessibility, and social connectivity. Streaming and content creation platforms helped esports and professional gaming thrive, while cross-platform play opened the gates for friends to play the same game together on different systems (like consoles, PCs, or mobile devices).
Cloud gaming and advances in smartphone hardware, like Apple’s A-series chips, made gaming more accessible to people by reducing the need for powerful local hardware. The gaming industry also focused on building communities through social features and platforms where spectators could watch others play in real time, creating an ecosystem where games could grow based on their popularity with streamers and viewers.
As online gaming communities grew, companies started recognizing wider safety issues beyond cheating and name-calling, like bullying, child safety risks, and other harmful behaviors. As threats grew more sophisticated, companies began using more advanced content moderation tools, often powered by AI, to detect and manage bad behavior. Larger platforms, like Steam and Xbox Live, developed more consistent moderation and enforcement systems across different games.
The growing popularity of large online multiplayer games with millions of players also led to the creation of Player Experience Teams. These teams focused on lowering churn by managing player communities, helping onboard new players, providing in-game support, handling complaints, and enforcing community guidelines to keep games safe.
Another leap forward in safety was the introduction of behavioral analysis, which identifies problematic players by their patterns of behavior rather than by individual actions like cheating or harassment. This allowed Player Experience Teams to target high-risk players and respond quickly, for example by issuing bans. A minimal sketch of how such a risk score might work is shown below.
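To make the idea concrete, here is a minimal sketch of how a behavioral risk score could combine several per-match signals into one number. The signal names, weights, and review threshold are illustrative assumptions for this article, not any specific platform's system, which would typically learn these values from labeled moderation outcomes.

```python
from dataclasses import dataclass

@dataclass
class PlayerStats:
    """Per-player behavioral signals over a rolling window (illustrative fields)."""
    matches_played: int
    reports_received: int  # times other players reported this player
    chat_flags: int        # messages flagged by a toxicity filter
    early_quits: int       # matches abandoned before completion

def risk_score(stats: PlayerStats) -> float:
    """Combine per-match rates into a single 0-1 risk score.

    The weights below are hypothetical; a production system would tune
    or learn them rather than hard-code them.
    """
    if stats.matches_played == 0:
        return 0.0
    report_rate = stats.reports_received / stats.matches_played
    flag_rate = stats.chat_flags / stats.matches_played
    quit_rate = stats.early_quits / stats.matches_played
    score = 0.5 * report_rate + 0.3 * flag_rate + 0.2 * quit_rate
    return min(score, 1.0)

# Players above the threshold are queued for review, not auto-banned.
REVIEW_THRESHOLD = 0.4  # illustrative value

player = PlayerStats(matches_played=50, reports_received=25,
                     chat_flags=30, early_quits=20)
score = risk_score(player)
if score >= REVIEW_THRESHOLD:
    print(f"Flag for review (score={score:.2f})")
```

The key design point is the one described above: the score looks at sustained patterns (rates across many matches), so a single bad game does not flag a player, while a consistent pattern does.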
It was also at this point that moderation became a high-wire act, balancing safety against freedom of expression. When teams punished every disruptive player, they found they hurt the overall experience by making the game feel over-censored. So they implemented solutions that reduced negative behavior without stifling creativity, like making in-game chat an opt-in feature, which protected users from violative behavior without shutting chat down entirely. A simple adjustment like that improved the experience for many players.
Today, many gaming platforms have found that player experience and Trust and Safety are closely linked, but their scope is too broad for one team to handle. To address this, companies have brought on dedicated Trust and Safety Teams to create a secure environment while Player Experience Teams tackle customer support and engagement. Trust and Safety teams address harmful behaviors, enforce policies, manage risks like fraud and harassment, and ensure games meet legal and ethical standards. By specializing in Trust and Safety rather than overall player experience, these teams are better equipped to handle emerging threats.
As online gaming and technology continue to evolve, gaming companies must stay proactive to keep their communities safe without diminishing the player experience. Advanced technologies like AI and machine learning have revolutionized the detection of harmful behavior. AI systems can analyze vast amounts of content, recognize issues before they’re reported, and allow teams to act quickly. Operating in real time, these systems can flag offenders or take automated actions to mitigate harm, making Trust and Safety efforts more effective.
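As a simplified illustration of that kind of real-time pipeline, the sketch below scores each chat message and maps the score to a tiered automated action. The `classify` function is a toy keyword stand-in and the thresholds are placeholders; a real system would call a trained ML classifier (over text, and increasingly voice and images) and route borderline cases to human moderators.

```python
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    HIDE = "hide message"          # message suppressed, sender unaffected
    MUTE = "temporary chat mute"   # short cooldown for the sender
    ESCALATE = "escalate to human moderator"

def classify(message: str) -> float:
    """Stand-in for a trained toxicity model returning a 0-1 score.

    This toy keyword check only exists to make the example runnable;
    it is not how production classifiers work.
    """
    toxic_terms = {"idiot", "trash", "kys"}
    hits = sum(term in message.lower() for term in toxic_terms)
    return min(hits * 0.4, 1.0)

def moderate(message: str) -> Action:
    """Map a toxicity score to a tiered response (thresholds are illustrative)."""
    score = classify(message)
    if score < 0.2:
        return Action.ALLOW
    if score < 0.5:
        return Action.HIDE
    if score < 0.8:
        return Action.MUTE
    return Action.ESCALATE

for msg in ["good game!", "you played like trash", "kys idiot"]:
    print(f"{msg!r} -> {moderate(msg).value}")
```

The tiered responses reflect the balance discussed earlier: mild violations get lightweight, reversible actions, while only the most severe cases trigger escalation, keeping enforcement proportionate without making chat feel censored.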
Philip Johnston