How Scammers Are Abusing GenAI to Impersonate and Manipulate

June 6, 2024


For every groundbreaking technology that sweeps the globe, it’s only a matter of time before bad actors find ways to exploit it. Generative AI is no exception. In fact, it has provided scammers with an arsenal of new tools to create and distribute harmful content. The misuse of GenAI by scammers is already widespread and continually evolving as they refine and diversify their tactics.

This is the first in a new series of articles sharing insights from ActiveFence's study of how threat actors misuse GenAI.

In this post, we look at one of the primary methods scammers employ using GenAI tools: impersonation.


What Counts as Impersonation?

Impersonation occurs when scammers adopt someone else's identity, whether that of an individual or a company, often altering or disguising parts of it to deceive others.

As expected, the widespread popularity of GenAI has made impersonation far easier than before. Two popular ways scammers use AI for impersonation are deepfake videos and AI-generated voiceovers: deepfake videos are produced with text-to-video and face-swapping tools, while AI-generated voiceovers clone people's voices using text-to-audio models. These tactics are becoming increasingly prevalent and are expected to grow in use for fraud, misinformation, and harmful activities such as child exploitation and deepfake pornography.

Scammers often create impersonated content in multiple languages to reach a wider audience, particularly targeting communities that may not be as familiar with the technology or its fraudulent applications. 


Types of Impersonation Scams

Impersonators exploit the identities of well-known figures or companies to make fraudulent platforms or offerings appear legitimate. They use logos, professional terminology, and familiar imagery to lend credibility to their fake content.

Here are some common uses of impersonation:

  • Financial Scams

Famous figures are often impersonated to promote financial schemes for profit. These scams typically involve fake investment advice, promises of passive income, and get-rich-quick schemes, with images of high-profile entrepreneurs and executives used to add credibility. The main target sectors are crypto and fintech.

One notable incident occurred at the beginning of 2024, when the internet was flooded with headlines linking Elon Musk and his tech companies (Tesla, SpaceX, and X) to dubious crypto exchange offers. Ads and videos across various social media platforms featured Musk prominently, promising Bitcoin giveaways and new trading apps.

Of course, Elon Musk didn't create these shady crypto trading websites promoted by random accounts on Facebook. The ads and videos were fake, crafted with sophisticated GenAI tools. Yet some users fell for the scam and transferred money to these sites, enticed by Musk's familiar face and logos.

The allure of financial advice from a wealthy tech CEO proved irresistible to many. And Musk isn’t the only one whose image has been used in deepfake financial scams—other famous victims include Mark Zuckerberg and Dr. Phil.

Various images of deepfake videos used in generative AI impersonation scams

  • Romance Scams

Romance scams involving fake celebrity profiles have led victims to lose thousands of dollars. In these online catfishing schemes, criminals pose as famous personalities, message victims through social media, and convince them that they really are the celebrity and that the affection is genuine, sometimes maintaining the relationship for extended periods. Eventually, they persuade victims to send money, often under the guise of loans or other false pretenses.

While these scams may seem less convincing than financial ones, many victims still fall for them. Fraudsters use sophisticated grooming tactics and target vulnerable individuals who are less likely to recognize the deceit.


  • Relative/Friend Impersonation Scams

While high-profile scams involving celebrities grab headlines, fake friend scams also leverage GenAI to create convincing deceptions. These scams often target the elderly, deceiving them into believing their loved ones are in distress and need funds wired immediately.

In 2022, relative impostor scams were the second most common racket in the US, according to the Federal Trade Commission (FTC), which received over 2.4 million fraud reports from consumers that year, including many from people swindled by scammers pretending to be friends and family. Over 5,100 of these incidents occurred over the phone, resulting in more than $8.8 million in losses.

  • Misinformation Campaigns

While less common than financial scams, misinformation campaigns involving impersonation are a significant issue, especially during election years and periods of political unrest. In these campaigns, political figures, parties, news agencies, and celebrities are impersonated to spread misleading or false narratives disguised as legitimate information.

Celebrities, news anchors, and politicians are common targets for impersonation because recordings of their appearance and voice are abundant online, giving cloning tools plenty of source material. These impersonations serve various purposes, from satire to scams to deliberate disinformation.

A recent high-profile example involved an AI-generated audio message impersonating Joe Biden, attempting to dissuade people from voting in the New Hampshire primary.

The evolving sophistication of these misinformation tactics presents a challenge for platform policy teams and regulators in detecting, tracing, and tackling the issue. Key regulations, like the EU’s Digital Services Act and the UK’s Online Safety Act 2023, aim to address these issues, reflecting the significant impact of misinformation and the urgent need to combat it. 

ActiveFence has extensively covered this field, highlighting its dynamic nature and the difficulty of effectively addressing it.

Impersonation Tactics and Methods

  • Hijacked Accounts

Account hijacking is often a crucial early step in the impersonation process.

Cybercriminals often target social media accounts with large followings, employing sophisticated phishing attacks that involve impersonation tactics. They send deceptive emails disguised as brand collaborations or platform copyright notices, presenting recipients with seemingly legitimate opportunities. These emails lure people into downloading a file that masquerades as a harmless PDF but actually contains malware, such as the RedLine Infostealer, designed to steal the user's information.

Once opened, the malware extracts vital data from the victim's computer, including session tokens and cookies. Because these tokens authenticate the user without a password, attackers can replay them to gain direct access to an account, such as a YouTube channel, and compromise it without ever logging in.
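To illustrate why stolen session tokens are so dangerous, and how platforms commonly respond, here is a minimal sketch of one standard defense: binding a session to the coarse context in which it was issued and challenging reuse from an unfamiliar one. This technique is not described in the post itself, and all names and thresholds below are hypothetical.

```python
# Hypothetical sketch (not any specific platform's implementation): flag
# session-token reuse from an unfamiliar context. Stolen cookies let attackers
# skip login entirely, so platforms often bind sessions to coarse device and
# network fingerprints and force re-authentication when too much changes at once.

from dataclasses import dataclass

@dataclass(frozen=True)
class SessionFingerprint:
    ip_prefix: str   # e.g., first two octets, to tolerate normal IP churn
    user_agent: str
    country: str

def is_suspicious(issued: SessionFingerprint, current: SessionFingerprint) -> bool:
    """Treat a session as suspicious when the context it was issued in has shifted."""
    changes = sum([
        issued.ip_prefix != current.ip_prefix,
        issued.user_agent != current.user_agent,
        issued.country != current.country,
    ])
    return changes >= 2  # one change is routine; several at once suggests token theft

# A replayed cookie arriving from a new network, client, and country
# should trip a re-authentication challenge rather than a silent takeover.
original = SessionFingerprint("203.0", "Chrome/125", "US")
replayed = SessionFingerprint("198.51", "curl/8.4", "RU")
assert is_suspicious(original, replayed)
```

A check along these lines would have forced a fresh login when the stolen cookies described above were replayed from the attacker's machine, blunting the hijack even after the malware succeeded.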

Following the hijacking, attackers manipulate the channel’s username, title, and profile picture, leveraging its high subscriber count to appear credible. They adopt the identities of well-known figures or brands, creating an illusion of authenticity to deceive viewers and perpetrate scams.

Moreover, threat actors not only fabricate accounts for deceptive purposes but also trade hijacked accounts on underground and legitimate marketplaces. These accounts, categorized by audience size and revenue, enable scammers to tailor their schemes to specific target audiences.

  • Deceptive Identity

Deceptive Identity involves creating faux personas to exploit presumed credibility and deceive users. Rather than impersonating real individuals, this tactic fabricates non-existent news agencies, government organizations, and businesses. These personas are typically used to promote conspiracy theories and spread misinformation.

Fake AI-generated accounts or websites often gain traffic and subscribers to appear legitimate before spreading misinformation. A prevalent tactic involves creating websites that mimic legitimate news sites but contain low-quality or false information. Operating with little human oversight, these sites give the impression that journalists produced the content while often failing to disclose that the material is AI-generated. This significantly damages media trust.

In a real-life example, ActiveFence researchers identified a deceptive identity disguised as a legitimate news source on a prominent social platform. The account mainly published videos with AI-generated voiceovers of BBC articles over stock footage. Some videos featured a simulated presenter, while others used AI-generated studio backgrounds, presenters, and voices, creating a convincing illusion of authenticity.

Risks Posed by Impersonation Scams

Beyond the obvious risks like financial loss, personal information theft, and reputational damage to public figures and businesses, impersonation scams inflict deeper societal harms. They include:

  • Erosion of Trust: Impersonation scams erode trust in digital platforms, online transactions, news outlets, and legitimate businesses. When people realize that trusted figures or companies can be easily faked, they become more skeptical of genuine communications and offers, negatively impacting online commerce and information sharing.
  • Undermining Democratic Processes: In political contexts, impersonation scams can undermine democratic processes by spreading misinformation and manipulating public opinion. Fake endorsements or statements attributed to political figures can influence voter behavior and distort democratic discourse.
  • Compromised Cybersecurity: Scammers often use impersonation to deliver malware or gain unauthorized access to secure systems. By posing as trusted individuals or entities, they can trick victims into clicking malicious links or providing access credentials, leading to data breaches and other cybersecurity incidents.
  • Degrading the Status of “Truth”: These scams contribute to philosophical and societal damage, as people begin to question the authenticity of what they see and hear. This dilution of what is considered “true” and “real” fuels the spread of conspiracy theories and misinformation, creating a never-ending cycle of doubt and falsehood.

By understanding and addressing these risks, stakeholders can develop more effective strategies to protect people and organizations from the detrimental effects of impersonation scams.

Mitigating the Harm: Actions for AI Providers 

AI companies can take several proactive measures to ensure their technologies are not used to perpetuate impersonation scams and fraud:

  1. Robust Verification Processes: Implement stringent checks to ensure that users and content are authentic, such as multi-factor authentication and real-time content monitoring.
  2. AI Abuse Detection: Develop or deploy tools that can detect and flag potential misuse of AI systems, including monitoring for deepfakes and other forms of AI-generated impersonation (a minimal sketch of such a check follows this list). However, it's essential to acknowledge that fraud remains fraud, regardless of whether the content behind it is AI-generated. Traditional fraud prevention tactics should therefore serve as the primary defense; the true adversary is fraud itself, not the GenAI tools or the fabricated content they produce.
  3. Promoting Tech Literacy: Educating users on the risks and indicators of scams, as well as effective fraud detection and prevention methods, can be achieved through public awareness campaigns, partnerships with educational institutions, and advocacy for industry guidelines and policies. Empowering users with resources and training enables them to better protect themselves and enhance overall platform security.

By implementing robust fraud detection and prevention strategies and educating their user base, AI companies can reduce scams on their platforms and foster a safer digital environment.


In Summary

The rapid advancement of deepfakes, fueled by GenAI technologies, has given scammers powerful tools to deceive and manipulate. From financial scams leveraging the likenesses of famous figures to sophisticated misinformation campaigns and deceptive identities, the misuse of GenAI is a growing threat.

However, the focus for platforms should be on comprehensive fraud prevention, not just detecting AI-generated fake content. Combating fraud requires a strategic approach, emphasizing the development of robust tools to identify and prevent abuse at its source and throughout its lifecycle. While the misuse of GenAI is one aspect of the problem, it should not overshadow the goal of stopping harm.

Proactively identifying emerging threats and implementing robust measures to counter fraudulent activities, including the misuse of GenAI, is of utmost importance. As technologies evolve, so too must our strategies to protect against the expanding landscape of digital deception, focusing on safeguarding rather than shunning technological advancements.
