Countering Malicious Bots

February 16, 2023
A visualization of a bot network with interconnected nodes and digital lines, representing the complexity and reach of malicious bots.

The internet is flooded with inauthentic traffic that spreads online harm. Much of this flood is driven by bot accounts, which are used to efficiently spread misinformation, child sexual abuse material (CSAM), and terrorist propaganda. The same bots can also be used to coordinate schemes that defraud tech companies engaged in advertising. At the forefront of this fight, ActiveFence works to identify coordinated, inauthentic, and harmful activity on platforms, while our CTI team monitors underground marketplaces to locate the threat actors behind these dangerous behaviors.

What are Bots?

Before understanding how to counter bots, we must first understand what bots are.

Bots in and of themselves are not necessarily malicious: they are automated pieces of software designed to perform a pre-programmed activity in a manner that imitates humans. Many companies use bots for customer communications, to detect copyright infringements, to locate the lowest prices, or to analyze a website’s content and improve its SEO ranking. However, when we in Trust & Safety talk about bots, we mostly deal with malicious bots, which generally fall into two categories:

  • Click or download bots: These bots click on or download content to create false engagement statistics that inflate a digital asset’s perceived popularity. These are used for advertising fraud, but they can also be utilized to game algorithms and increase the reach of content shared by information operations.
  • Spambots: These spread material en masse across the internet, posting comments on social media and other UGC-hosting platforms and sending emails and links. Threat actors use them in information operations and for fraudulent activities targeting e-commerce and other tech sectors.

In the next sections, we will show how these bots are used to cause harm online.

Bots for Disinformation Operations

Disinformation isn’t new; it existed long before bots and, in fact, long before the internet itself. What bots change is speed: they allow disinformation and misinformation to spread quickly, sowing distrust and harming democratic processes.

By tapping into pre-existing interest groups with similar beliefs and interests, disinformation agents can utilize bots to spread false narratives like a virus. The false information infects one user who reshares the false content, which spreads throughout the whole system.

The use of bots for spreading misinformation and disinformation is well documented. Emilio Ferrara, a research professor in data science at the University of Southern California, found that threat actors had deployed 400,000 bots in the political conversation around the 2016 US presidential election; this subset of bots was responsible for around one-fifth of all related social media postings. Outside domestic politics, disinformation via bots has become a fixture of war. In the context of the Russia-Ukraine war, Ukrainian cyber police have found and taken action against many domestic pro-Kremlin bot farms. One operation, dubbed Botofarm, saw 100,000 SIM cards seized and 28 online mobile telephone registration platforms blocked. These bots shared pro-Russian disinformation and propaganda about the ongoing war to weaken Ukrainian morale.

To combat this activity, ActiveFence’s information operations intelligence teams collect signifiers of inauthentic activity. These signals reveal specific accounts on our partners’ platforms that require review. Mapping the metadata of these accounts surfaces repeated identifiable information, which is used to identify networks of similar bot accounts, as well as the accounts of real individuals involved, enabling Trust & Safety teams to remove an entire disinformation network in one operation.
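
To make the mapping step concrete, here is a minimal sketch of how shared metadata can link accounts into candidate networks. The account fields, values, and the two-signal threshold are illustrative assumptions, not ActiveFence’s production logic:

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical account records; real inputs would come from platform metadata exports.
accounts = [
    {"id": "a1", "avatar_hash": "9f3b", "signup_date": "2023-01-12", "urls": {"bit.ly/xyz"}},
    {"id": "a2", "avatar_hash": "9f3b", "signup_date": "2023-01-12", "urls": {"bit.ly/xyz"}},
    {"id": "a3", "avatar_hash": "77aa", "signup_date": "2022-06-03", "urls": {"example.com/post"}},
]

def shared_signals(a, b):
    """Count how many metadata signals two accounts have in common."""
    score = 0
    score += a["avatar_hash"] == b["avatar_hash"]   # same profile image
    score += a["signup_date"] == b["signup_date"]   # same join date
    score += bool(a["urls"] & b["urls"])            # overlapping shared links
    return score

# Link any pair of accounts sharing at least two signals; linked accounts form candidate networks.
links = defaultdict(set)
for a, b in combinations(accounts, 2):
    if shared_signals(a, b) >= 2:
        links[a["id"]].add(b["id"])
        links[b["id"]].add(a["id"])

print(dict(links))  # {'a1': {'a2'}, 'a2': {'a1'}}
```

Starting from one confirmed bot, this kind of pairwise linking expands the review set to the rest of the network rather than treating each account in isolation.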

 

Flowchart: inauthentic account detection via metadata analysis

 

Driving Traffic for CSAM and Terrorist Promotion

In addition to spreading misinformation, bots are used by CSAM vendors to promote and sell their illegal content on major social media platforms. To achieve broad engagement, these threat actors simultaneously generate large batches of bot accounts to share explicit CSAM image and video content tagged with specific relevant hashtags. Similarly, terror organizations such as ISIS and al-Qaeda utilize bots to amplify their network resiliency. These bots publicize new terrorist domains to supporters and share new content produced by the central terror organization.

In both CSAM and terror content distribution, bots allow operators to use scale to their advantage while masking their own identities. If one or several bot accounts are identified and blocked, there is still a chance that others will go undetected, allowing the content to continue spreading.

In ActiveFence’s work countering online terrorist activity, we see that bot creation spikes in the days immediately following the release of a new piece of terrorist video content; this pattern is particularly pronounced for ISIS. By focusing on the days when bot activity is most likely to take place, our partners can gauge whether an increase in account activity is organic or driven by terror-fueled bots.
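
One simple way to operationalize this is to compare each day’s count of newly created accounts against a trailing baseline and flag outliers. The sketch below is illustrative only; the counts, window size, and z-score threshold are assumptions, not the detection logic ActiveFence or its partners actually use:

```python
from statistics import mean, stdev

# Hypothetical daily counts of new accounts engaging with terror-related hashtags.
daily_new_accounts = [12, 9, 14, 11, 10, 13, 95, 88]  # surge after a video release

def spike_days(counts, window=5, z_threshold=3.0):
    """Flag days whose count sits far above the trailing-window baseline."""
    flagged = []
    for i in range(window, len(counts)):
        baseline = counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and (counts[i] - mu) / sigma > z_threshold:
            flagged.append(i)
    return flagged

print(spike_days(daily_new_accounts))  # [6]: the day after the release stands out
```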

Promotion of Fraudulent Activity

Bots also have many fraudulent applications. In sophisticated phishing campaigns, bots promote a high volume of advertisements for fraudulent offerings, sharing links to domains that offer special deals on items such as Web 3.0 assets. They also manipulate legitimate users into resharing the content, lending credibility that convinces susceptible users to trust the promoted websites. When those users visit the sites and attempt to make a purchase, they hand fraudsters their personal and financial account information.

Another bot-driven fraud method involves simulating authentic user activity to illegally collect advertising revenue. Using click bots and download bots to interact with content, fraudsters can inflate view counts and impressions or leave inauthentic comments and likes that draw greater attention to a digital asset. Mobile fraud actors run these bots on servers hosting emulators of various mobile devices and operating systems; from these emulators, bots download apps and perform in-app activity, clicking on ads and other monetized actions.

A Complex Challenge

While the harm generated by bots is clear, the solution is far from obvious. Recent attempts to take action against bot networks, whether reactively or proactively, have met significant challenges.

The reactive approach adopted by many platforms is IP blacklisting, which denies access to server-based bots that use a flagged IP address. While this frustrates threat actors, it doesn’t stop them entirely: they often circumvent identification by switching their servers’ IP addresses and returning to attack the platform and its users anew.
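
In its simplest form, the reactive pattern looks something like the sketch below. The blocklist contents use documentation-only example ranges, and real systems draw on threat feeds and expire entries over time:

```python
import ipaddress

# A static blocklist of flagged ranges (placeholder values).
BLOCKED_NETWORKS = [ipaddress.ip_network("203.0.113.0/24")]

def is_blocked(client_ip: str) -> bool:
    """Reject requests whose source IP falls inside a flagged network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BLOCKED_NETWORKS)

print(is_blocked("203.0.113.42"))  # True: known bot server is rejected
print(is_blocked("198.51.100.7"))  # False: a freshly rotated server IP passes
```

The second lookup illustrates the weakness described above: once an operator rotates to an unlisted address, the static list no longer helps.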

Detecting bot activity has also become more difficult as threat actors grow more sophisticated: AI text generators can now convincingly emulate human writing at speed, so a bot network operator can turn a single piece of text into many distinct posts that evade automated detection. In the same way, networked bot activity is staggered so that simultaneous mass actions do not trigger the abused platform’s safeguarding mechanisms.
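
One common counter to this kind of paraphrased spam is near-duplicate detection, for example comparing word-shingle overlap between posts. The example posts and threshold interpretation below are illustrative assumptions, not a description of any specific platform’s detection pipeline:

```python
def shingles(text, n=3):
    """Lowercased word n-grams used as a rough fingerprint of a post."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def jaccard(a, b):
    """Overlap between two shingle sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

post_a = "Huge giveaway today claim your free Web 3.0 asset at this link now"
post_b = "Claim your free Web 3.0 asset at this link now huge giveaway today"

similarity = jaccard(shingles(post_a), shingles(post_b))
print(round(similarity, 2))  # ~0.69: heavy overlap despite reordering flags the pair for review
```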

Proactive attempts also face significant challenges. In one example from December 2022, Twitter identified that bot networks often exploit the services of East Asian mobile telephone carriers. To tackle the problem, the company denied these carriers access to the platform, effectively creating another problem: while the move did stop the bot networks, it also denied access to authentic users who had enabled SMS-based two-factor authentication (2FA).

Subtler approaches to combating bots are therefore needed.

An Intelligence-Led Approach

Locating Bot Activity On-Platform

Bots’ behaviors depend on their specific function: they are used to conduct scams on dating apps, manipulate traffic on social media platforms, and manipulate rankings on online marketplaces, and each activity has a different signature. As bot operators have improved their concealment techniques, Trust & Safety teams must invest more resources in intelligence to identify these inauthentic accounts.

For example, key identifiers of bot accounts engaged in information operations and in the promotion of child sexual abuse material and terrorist propaganda include suspicious and repetitive metadata:

  • Multiple accounts repeatedly sharing the same set of URLs or links to groups on instant messaging platforms;
  • Multiple accounts that utilize similar usernames or variations on handles;
  • Multiple accounts sharing the same images for their profile pictures;
  • Multiple users engaged in similar activity who joined the platform on the same date;
  • Multiple accounts sharing posts about the same subject or article, even if the text is varied;
  • Multiple accounts tagging their content with the same hashtags.

Evaluating these criteria allows accounts to be risk scored for inauthenticity.
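
One straightforward way to turn these signals into a risk score is to weight each criterion and sum the weights an account triggers. The weights and review threshold below are illustrative assumptions, not ActiveFence’s scoring model:

```python
# Illustrative inauthenticity scoring over the criteria listed above.
SIGNAL_WEIGHTS = {
    "repeated_urls": 2.0,         # same URLs / messaging-group links shared by many accounts
    "similar_username": 1.5,      # near-identical handles or handle variations
    "shared_profile_image": 2.0,  # same profile picture reused across accounts
    "same_signup_date": 1.0,      # joined on the same date as other flagged accounts
    "same_topic_posts": 1.0,      # varied text, same subject or article
    "shared_hashtags": 0.5,       # identical hashtag sets
}

def risk_score(signals: dict) -> float:
    """Sum the weights of the signals an account triggers."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

account_signals = {"repeated_urls": True, "shared_profile_image": True, "same_signup_date": True}
score = risk_score(account_signals)
print(score, "-> review" if score >= 3.0 else "-> monitor")  # 5.0 -> review
```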

The identification methods shared above are important, but access to the threat actor communities that the bots serve, along with subject matter expertise in the specific threat, is critical. By tapping into these communities, teams can understand the tactics used by each operation, making it easier to find its on-platform entities, whether by tracing the content it shares or by identifying one or more involved accounts. Once these are mapped, their metadata can be used to find additional entities related to the operation and to take organized action against the entire network rather than targeting accounts one by one.

Marketplaces of Underground Bot Vendors

While many threat actors create bots simply by running scripts to share content, the more sophisticated bots are typically built for advertising fraud and involve ‘click bots’ and ‘download bots.’ These bots are usually created by specialist vendors and sold on underground markets.

By accessing these marketplaces, Trust & Safety teams can gain access to the accounts being sold and collect intelligence that allows them to take direct action to stop bot activity. Mapping the digital signals of acquired bot accounts can help teams identify similar accounts and actions. Additional insights can be gleaned from this collection, as illustrated in the sketch after the list, including:

  • The types of bots used on the platform, indicating the harmful activities that threat actors engage in;
  • The volume of bot listings and the number of sales per bot category, indicating the demand for bots and the ease of attack;
  • The average price of each type of bot listing, indicating the desirability of accounts and the penetrability of the platform’s defenses.
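
As a toy illustration of the kind of aggregation involved, the sketch below derives those three insights from hypothetical scraped listings; the field names and values are assumptions:

```python
from collections import defaultdict

# Hypothetical listings collected from an underground marketplace.
listings = [
    {"bot_type": "click bot", "price_usd": 40, "sales": 120},
    {"bot_type": "click bot", "price_usd": 55, "sales": 80},
    {"bot_type": "spambot", "price_usd": 15, "sales": 300},
]

summary = defaultdict(lambda: {"listings": 0, "sales": 0, "price_total": 0})
for item in listings:
    s = summary[item["bot_type"]]
    s["listings"] += 1
    s["sales"] += item["sales"]
    s["price_total"] += item["price_usd"]

for bot_type, s in summary.items():
    avg_price = s["price_total"] / s["listings"]
    print(f"{bot_type}: {s['listings']} listings, {s['sales']} sales, avg ${avg_price:.2f}")
```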

ActiveFence’s Approach to Bots

ActiveFence provides holistic coverage for Trust & Safety teams to ensure online platform integrity. Our systems and intelligence experts carry out deep threat intelligence and network analysis to locate entities engaged in a wide range of threats, including child abuse, disinformation, terrorism, and cyber threats. With access to sources of online harm on the clear, deep, and dark web, and linguistic capabilities in over 100 languages, we offer agile threat intelligence coverage that locates inauthentic activity on your platform, enabling our partners to effectively moderate harmful content and fake and bot accounts.
