Information warfare is playing a significant role in Russia’s invasion of Ukraine. Trust & Safety teams of all sizes must protect their platforms from being a tool of warfare. Here, we share seven information operations that Trust & Safety teams should be on the lookout for.
Warfare takes many forms, both on the field and on the screen. Digital warfare activities are vast: from cyber operations such as hacking and doxxing to the dissemination of disinformation, these weapons of war are varied, complex, and nuanced.
ActiveFence is witnessing many forms of information operations related to the current conflict in Ukraine. Not only are threat actors waging digital warfare, they are also conducting online operations to support military warfare on the ground. Difficult to detect, these activities utilize many different channels across multiple platforms.
To help Trust & Safety teams keep their platforms safe from weaponization, we share the top seven information operations that ActiveFence has detected in Eastern Europe and beyond.
1. Recruitment
Calls to enlist military personnel are frequently seen across multiple platforms. Combat soldiers, volunteers, and cyber experts are recruited to join Russia’s efforts online or on the battlefield. Pro-Russian online groups are recruiting for specific units in the Russian army itself, as well as for a private, Kremlin-backed military company linked to terrorist organizations. Calls for cyber experts skilled in hacking, pen-testing, spamming, and other cyber activities recur everywhere from social media platforms to payment and messaging platforms. Much of this online recruitment activity links to far-right units and organizations backed by pro-Kremlin online communities. Some Russian disinformation entities have also organized into a coalition to promote Russia’s image and spread disinformation, publicizing calls for volunteers with relevant film, social media, and other skill sets. Ukrainian actors are actively recruiting online as well; however, these efforts are generally viewed as legitimate.
2. Fundraising
Fundraising messaging for the Russian military is surfacing across platforms. These operations provide methods of donation for military purposes, including credit cards, cryptocurrencies, and wire transfers, often sharing bank account details. Known disinformation actors have been active in these efforts. At the same time, scammers are exploiting the situation by claiming to fund humanitarian efforts and armies on both sides of the conflict. Additionally, some threat actors have used platform ads to advertise their websites and organizations, seeking to funnel money through platforms’ payment capabilities.
3. Doxxing
Doxxing, or publicizing identifiable information about an individual or organization, is currently being detected across multiple online channels. By sharing this private information, threat actors encourage harassment and even physical violence against Ukrainian soldiers, journalists, activists, and cybersecurity personnel. Data leaks contain specific names, social media accounts, home addresses, and more. Doxxing places many individuals – even those far removed from the battlefield – in grave danger.
4. Extremist organizations
Extremist organizations often take advantage of moments of national grief, hijacking current events to promote their own agenda. Numerous far-right Ukrainian actors have taken advantage of the current crisis, joining Ukraine’s fight against Russia while infiltrating the legitimate efforts to promote their extremist political agendas online. Parallel to this, extremist neo-Nazi and white supremacists around the world have been pushing their agendas under the guise of supporting the Ukrainian cause.
5. Hacking
Pre-existing and newly formed hacking groups are highly active, hacking and leaking sensitive information. Oftentimes, hackers forge the documents they leak to promote dangerous propaganda or spread disinformation. From leaking alleged military intelligence to crowdsourcing and leaking intelligence gathered on the Ukrainian military, hackers aim to demoralize and harm the adversary by spreading lies, fear, and despair amongst Ukrainians.
6. Propaganda
As Trust & Safety teams are well aware, Russian state actors have been pushing propaganda to justify the invasion and weaken morale amongst Ukrainians. False claims of Ukrainian military losses, alleged terrorist activities of the Ukrainian army, and narratives supporting Putin’s “denazification” of Ukraine are transmitted across the internet. Propaganda, a centuries-old tactic, spreads dangerously fast and must be swiftly removed to reduce harmful exposure. It degrades on-platform discourse, poisoning healthy and positive interactions between online users.
7. Disinformation and Inauthentic Activity
Disinformation, misinformation, bots, and fake accounts are just a few examples of inauthentic activity happening across online platforms. From well-known disinformation actors creating new, harmful narratives to bots amplifying lies and fraudulent activities, these forms of information operations are growing and becoming more sophisticated.
During times of war, threat actors intensify their activities and grow increasingly sophisticated. As we’ve seen, unaffiliated entities use different channels and mediums and employ new tactics to avoid detection, wreaking havoc online and causing real-world harm. Mainstream platforms of all sizes are being weaponized by a growing number of threat actors, forcing Trust & Safety teams to take action.
To effectively contain the threat of information and psychological warfare, online platforms cannot rely solely on known state-affiliated entities and disinformation actors. They must understand the many actors, mechanisms, narratives, and tactics used to carry out these operations. Proactively monitoring coordinated campaigns and identifying emerging players and trends as they arise enables technology companies to ensure that their platforms are not weaponized in the current conflict or in other geopolitical events.