For every groundbreaking technology that sweeps the globe, it’s only a matter of time before bad actors find ways to exploit it. Generative AI is no exception. In fact, it has provided scammers with an arsenal of new tools to create and distribute harmful content. The misuse of GenAI by scammers is already widespread and continually evolving as they refine and diversify their tactics.
This article is the first in a new series sharing insights from ActiveFence’s study of how threat actors misuse GenAI.
In this post, we look at one of the primary methods scammers employ using GenAI tools: impersonation.
Impersonation happens when scammers assume the identity of someone else, whether an individual or a company, often altering or disguising parts of that identity to deceive others.
As expected, the widespread popularity of GenAI has made impersonation much easier than before. Two popular ways scammers use AI for impersonation are deepfake videos and AI-generated voiceovers: deepfake videos use text-to-video and face-swapping tools to create deceptive footage, while AI-generated voiceovers clone people’s voices using text-to-audio models. These tactics are becoming increasingly prevalent and are expected to grow in use for fraud, misinformation, and harmful activities like child exploitation and the creation of deepfake pornography.
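On the defensive side, here is a minimal sketch of how a moderation pipeline might screen audio clips for cloned voices: extract spectral features and score them with a simple classifier trained on labeled samples. The feature choice, classifier, and file paths are illustrative assumptions; production detectors are considerably more sophisticated.

```python
# A toy real-vs-cloned voice detector: MFCC features + logistic regression.
# Dataset, paths, and the train_detector helper are illustrative assumptions.
import numpy as np
import librosa
from sklearn.linear_model import LogisticRegression

def extract_features(path: str) -> np.ndarray:
    """Summarize a clip as its mean MFCC vector - a crude but common baseline."""
    audio, sample_rate = librosa.load(path, sr=16_000, mono=True)
    mfcc = librosa.feature.mfcc(y=audio, sr=sample_rate, n_mfcc=20)
    return mfcc.mean(axis=1)

def train_detector(real_paths: list[str], cloned_paths: list[str]) -> LogisticRegression:
    """Fit a binary classifier on labeled genuine and AI-cloned voice clips."""
    X = np.stack([extract_features(p) for p in real_paths + cloned_paths])
    y = np.array([0] * len(real_paths) + [1] * len(cloned_paths))
    return LogisticRegression(max_iter=1000).fit(X, y)

# Usage, assuming labeled .wav samples on disk:
# detector = train_detector(["real_1.wav"], ["cloned_1.wav"])
# p_cloned = detector.predict_proba([extract_features("suspect.wav")])[0, 1]
```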
Scammers often create impersonated content in multiple languages to reach a wider audience, particularly targeting communities that may not be as familiar with the technology or its fraudulent applications.
Impersonators exploit the identities of well-known figures or companies to make fraudulent platforms or offerings appear legitimate. They use logos, professional terminology, and familiar imagery to lend credibility to their fake content.
Here are some common uses of impersonation:
Famous figures are often impersonated to promote financial schemes for profit. Common scams involve fake investment advice, promises of passive income, and get-rich-quick schemes, with the likenesses of high-profile entrepreneurs and business leaders used to add credibility. The main target sectors are crypto and fintech.
One notable incident occurred at the beginning of 2024, when the internet was flooded with headlines linking Elon Musk and his tech companies (Tesla, SpaceX, and X) to dubious crypto exchange offers. Ads and videos across various social media platforms featured Musk prominently, promising Bitcoin giveaways and new trading apps.
Of course, Elon Musk didn’t create these shady crypto trading websites that were promoted by random people on Facebook. The ads and videos were fake, crafted with sophisticated GenAI tools. Yet, some fell for the scam and transferred money to these sites, enticed by Musk’s familiar face and logos.
The allure of financial advice from a wealthy tech CEO proved irresistible to many. And Musk isn’t the only one whose image has been used in deepfake financial scams—other famous victims include Mark Zuckerberg and Dr. Phil.
Romance scams involving fake celebrity profiles have led victims to lose thousands of dollars. In these online catfishing schemes, criminals pose as famous personalities, message victims through social media, and convince them of their celebrity status and affection, sometimes maintaining the relationship for a long time. Eventually, they persuade victims to send money, often under the guise of loans or other false pretenses.
While these scams may seem less convincing than financial ones, many still fall for them. Fraudsters use sophisticated grooming tactics and target vulnerable individuals who are less likely to recognize the deceit.
While high-profile scams involving celebrities grab headlines, fake friend scams also leverage GenAI to create convincing deceptions. These scams often target the elderly, deceiving them into believing their loved ones are in distress and need funds wired immediately.
According to the Federal Trade Commission (FTC), impostor scams were the second most common racket in the US in 2022, with over 2.4 million reports from consumers of people being swindled by those pretending to be friends and family. Over 5,100 incidents occurred over the phone, resulting in more than $8.8 million in losses.
While less common than financial scams, misinformation campaigns involving impersonation are a significant issue, especially during election years and periods of political unrest. In these campaigns, political figures, parties, news agencies, and popular figures are impersonated to spread misleading or false narratives disguised as legitimate information.
Celebrities, news anchors, and politicians are common targets for impersonation because authentic footage and audio of them are abundant online. These impersonations serve various purposes, from satire to scams to deliberate disinformation.
A recent high-profile example involved an AI-generated audio message impersonating Joe Biden, attempting to dissuade people from voting in the New Hampshire primaries.
The evolving sophistication of these misinformation tactics presents a challenge for platform policy teams and regulators in detecting, tracing, and tackling the issue. Key regulations, like the EU’s Digital Services Act and the UK’s Online Safety Act 2023, aim to address these issues, reflecting the significant impact of misinformation and the urgent need to combat it.
ActiveFence has extensively covered this field, highlighting its dynamic nature and the difficulty of effectively addressing it.
Account hijacking is often a crucial step in the multi-stage process of impersonation.
Cybercriminals often target social media accounts with large followings, employing sophisticated phishing attacks built on impersonation. They send deceptive emails disguised as brand collaborations or platform copyright notices, presenting recipients with seemingly legitimate opportunities. These emails coerce recipients into downloading a file that masquerades as a harmless PDF but actually contains malware, such as the Redline Infostealer, designed to steal the user’s information.
Once opened, the malware extracts vital data from the victim’s computer, including session tokens and cookies. Because these tokens authenticate the user without a password, attackers gain direct access to the victim’s account, such as a YouTube channel, and compromise it.
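To see why stolen session tokens are so valuable, here is a minimal sketch of cookie-based authentication and one common mitigation: binding each token to a client fingerprint so a replayed token from a different device is rejected. All names and structures here are hypothetical, not any platform’s actual implementation.

```python
# Why a stolen session token bypasses the password, and a fingerprint-binding
# mitigation. Everything here is a simplified, hypothetical illustration.
import secrets

SESSIONS: dict[str, dict] = {}  # token -> session record

def log_in(user: str, client_fingerprint: str) -> str:
    """Issue a session token after a successful password check (elided)."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = {"user": user, "fingerprint": client_fingerprint}
    return token

def authenticate(token: str, client_fingerprint: str) -> str | None:
    """A bare token lookup would let any stolen cookie impersonate the victim."""
    session = SESSIONS.get(token)
    if session is None:
        return None
    # Mitigation: also verify the token comes from the device it was issued to.
    if session["fingerprint"] != client_fingerprint:
        return None  # likely token theft: force re-authentication
    return session["user"]

# The attacker's replay fails even though the stolen token itself is valid:
token = log_in("creator", client_fingerprint="victim-laptop")
assert authenticate(token, "victim-laptop") == "creator"
assert authenticate(token, "attacker-pc") is None
```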
Following the hijacking, attackers manipulate the channel’s username, title, and profile picture, leveraging its high subscriber count to appear credible. They adopt the identities of well-known figures or brands, creating an illusion of authenticity to deceive viewers and perpetrate scams.
Moreover, threat actors not only fabricate accounts for deceptive purposes but also trade hijacked accounts on underground and legitimate marketplaces. These accounts, categorized by audience size and revenue, enable scammers to tailor their schemes to specific target audiences.
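One way platforms can operationalize these signals is to flag high-reach accounts whose identity fields change all at once, a common post-hijack pattern. The following is a hypothetical heuristic with assumed fields and thresholds, not a description of any platform’s actual detection logic.

```python
# Hypothetical post-hijack heuristic: a large channel that swaps most of its
# identity (name, title, avatar) in one step is routed to human review.
from dataclasses import dataclass

@dataclass
class ChannelSnapshot:
    username: str
    title: str
    avatar_hash: str
    subscribers: int

def hijack_risk(before: ChannelSnapshot, after: ChannelSnapshot,
                min_subscribers: int = 100_000) -> bool:
    """Flag high-subscriber channels that replace two or more identity fields."""
    if after.subscribers < min_subscribers:
        return False
    changed = sum([
        before.username != after.username,
        before.title != after.title,
        before.avatar_hash != after.avatar_hash,
    ])
    return changed >= 2

# Example: a gaming channel abruptly rebranded as a crypto-giveaway stream.
old = ChannelSnapshot("gamerguy", "Let's Play!", "a1b2", 250_000)
new = ChannelSnapshot("Tesla Live", "BTC Giveaway", "ff09", 250_000)
assert hijack_risk(old, new)
```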
Deceptive Identity involves creating faux personas to exploit presumed credibility and deceive users. Unlike impersonating real individuals, this tactic involves fabricating non-existent news agencies, government organizations, and businesses. These personas are typically used to promote conspiracy theories and spread misinformation.
Fake AI-generated accounts or websites often gain traffic and subscribers to appear legitimate before spreading misinformation. A prevalent tactic involves creating websites that mimic legitimate news sites but contain low-quality or false information. Operating with little human oversight, these sites give the impression that journalists produced the content while often failing to disclose that the material is AI-generated. This significantly damages media trust.
In a real-life example, ActiveFence researchers identified a deceptive identity disguised as a legitimate news source on a prominent social platform. The account mainly published videos with AI-generated voiceovers of BBC articles over stock footage. Some videos featured a simulated presenter, while others used AI-generated studio backgrounds, presenters, and voices, creating a convincing illusion of authenticity.
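Triaging suspected AI-generated outlets can start from simple, reviewable signals like those described above: missing bylines, no AI disclosure, and machine-paced publishing. The sketch below scores such signals; the features and thresholds are illustrative assumptions, not ActiveFence’s methodology.

```python
# Hypothetical triage score for suspected undisclosed AI content farms.
from dataclasses import dataclass

@dataclass
class SiteProfile:
    has_bylines: bool        # named human authors on articles
    discloses_ai_use: bool   # any statement that content is AI-generated
    posts_per_day: float     # sustained publishing rate
    reuses_wire_text: bool   # articles closely paraphrase other outlets

def triage_score(site: SiteProfile) -> int:
    """Count signals consistent with an undisclosed AI content farm."""
    return sum([
        not site.has_bylines,
        not site.discloses_ai_use,
        site.posts_per_day > 50,  # far faster than a human newsroom
        site.reuses_wire_text,
    ])

# Sites scoring 3+ might be routed to human review:
print(triage_score(SiteProfile(False, False, 120.0, True)))  # -> 4
```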
Beyond the obvious risks like financial loss, personal information theft, and reputational damage to public figures and businesses, impersonation scams inflict deeper societal harms, such as the erosion of public trust in media and institutions.
By understanding and addressing these risks, stakeholders can develop more effective strategies to protect people and organizations from the detrimental effects of impersonation scams.
AI companies can take several proactive measures to ensure their technologies are not used to perpetrate impersonation scams and fraud, from robust fraud detection and prevention to user education.
By implementing robust fraud detection and prevention strategies and educating their user base, AI companies can reduce scams on their platforms and foster a safer digital environment.
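As one concrete example of such a safeguard, a generation service could screen incoming prompts for attempts to impersonate listed public figures before the model runs. This is a deliberately simplified sketch; the watchlist, cue patterns, and routing policy are all hypothetical.

```python
# Hypothetical pre-generation guardrail for a text-to-speech/video service.
import re

PUBLIC_FIGURE_WATCHLIST = {"elon musk", "joe biden", "mark zuckerberg"}
IMPERSONATION_CUES = re.compile(
    r"\b(voice of|sound like|speak as|deepfake|face[- ]swap)\b", re.IGNORECASE
)

def screen_generation_request(prompt: str) -> str:
    """Return 'block', 'review', or 'allow' for an incoming generation prompt."""
    lowered = prompt.lower()
    names_hit = any(name in lowered for name in PUBLIC_FIGURE_WATCHLIST)
    cue_hit = IMPERSONATION_CUES.search(prompt) is not None
    if names_hit and cue_hit:
        return "block"   # explicit attempt to impersonate a listed figure
    if names_hit or cue_hit:
        return "review"  # ambiguous: route to human moderation
    return "allow"

print(screen_generation_request(
    "Make this sound like Elon Musk announcing a Bitcoin giveaway"))  # block
```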
The rapid advancement of deepfakes, fueled by GenAI technologies, has given scammers powerful tools to deceive and manipulate. From financial scams leveraging the likenesses of famous figures to sophisticated misinformation campaigns and deceptive identities, the misuse of GenAI is a growing threat.
However, the focus for platforms should be on comprehensive fraud prevention, not just detecting GenAI-induced fake content. Combating fraud requires a strategic approach, emphasizing the development of robust tools to identify and prevent abuse at its source and throughout its lifecycle. While the misuse of GenAI is one aspect of the problem, it should not overshadow the goal of stopping harm.
Proactively identifying emerging threats and implementing robust measures to counter fraudulent activities, including the misuse of GenAI, is of utmost importance. As technologies evolve, so too must our strategies to protect against the expanding landscape of digital deception, focusing on safeguarding rather than shunning technological advancements.