What would you do if you got a call from a loved one in an emergency asking for money? You may not think twice about giving them a hand. In reality, your “loved one” may be a hacker using AI-generated voice-cloning technology.
Advances in AI technology have made it easier than ever for scammers to imitate someone on the phone, by text, or even in a video. This is making it harder for people to interact safely online and over the phone—and people of all ages can easily fall victim to AI scams in a growing number of areas, including finance, online dating, social media, and more.
What the experts say
"Generative AI has lowered the cost of persuasion and raised the polish of deception." - Gen 2025 Threat Report
Threat Research Team, Gen
AI technology is advancing rapidly, which means AI scams are likely to become more sophisticated and prolific in the coming years. It’s more important than ever to learn about the threats posed by AI and how to help protect yourself from becoming a victim.
AI scams involve fraudulent activity that uses artificial intelligence technology to exploit people—often impersonating a trusted person to access personal information. Common AI scams include AI-voice cloning, fake AI chatbots, and deepfakes.
AI technology is groundbreaking because it can learn from data and make predictions. For example, the well-known AI tool ChatGPT was trained on vast amounts of human-written text. Its algorithms identify patterns in that text, allowing it to respond to a question much as a human would. In fact, according to one survey by Tooltester, more than half of people who read AI-generated content can’t distinguish it from human-written content, meaning these tools may have already passed the Turing Test.
While AI technology is exciting, its increasing sophistication is making it much easier for scammers to commit cybercrime such as fraud and identity theft.
AI algorithms can learn a person’s appearance and create original, photorealistic images of them. They can also learn what a person sounds like and recreate their voice on a phone call. AI can even combine the two to create a realistic video of someone doing or saying almost anything.
When hackers can impersonate someone, scamming their victims into giving up their personally identifiable information becomes much easier.
As AI technology gets better, AI scams of all kinds are becoming more sophisticated, too. And, as a result, cybercriminals are getting more creative in leveraging new AI technology for profit. We’ll show you a few different types of AI scams to look out for, how to spot them, and how to defend against them.
Here are common AI scams to watch out for:
AI chatbot scams are fake online chat encounters in which an AI imitates a person, such as a customer support representative. The AI chatbot impersonates a human and typically asks the user for their personal information.
AI chatbots often pose as trustworthy entities, such as technical support on a known website. They may also lure you in by claiming you’ve won a prize, offering investment advice, or pretending to be a match on a dating site.
When you start chatting with an AI chatbot, you might believe it’s a real human. But telltale signs, such as unnaturally fast, scripted-sounding replies and unprompted requests for personal information, can help you detect AI chatbot scams.
An AI deepfake scam involves a fake video of a real—usually famous—person, created by a scammer who has trained an AI tool on that person’s actual videos and vocal recordings. Once the AI has enough data, it can generate videos of the person in almost any situation. Deepfake videos usually feature celebrities or politicians because there are lots of recordings of them for AI to learn from.
What the experts say
"Cybercriminals continue to refine their methods, tying multiple advanced attack techniques into a single campaign that demonstrates the latest trend in ‘Scam‑Yourself’ attacks." - Gen Blog, 2025
Luis Corrons & Jan Rubín, Threat Research Team
A hacker may send you a video of a celebrity you admire asking you to donate to a cause, with a link that leads to a malicious website. Or, a politician you trust may announce a lucrative tax rebate, directing you to a tax scam site that requests your Social Security number.
Deepfakes can be very difficult to detect, and it’s getting easier to deepfake almost anyone because of the amount of video content many of us post on social media. One way to verify whether a video is real is to look at where it’s posted: if the video appears on the person’s official account, it’s probably genuine. But even this isn’t foolproof, as cybercriminals can hack real accounts and post deepfakes on them, and well-known individuals have been known to share deepfakes of other people on their own accounts.
The best way to protect yourself is to avoid clicking links or following the advice you get from videos online.
AI investment scams convince people to give up their money by promising big returns or by encouraging them to sign up for illegitimate cryptocurrency or stock-trading platforms. Scammers do this using AI-driven social engineering tactics like AI phishing, AI deepfakes, and AI chatbots.
As with deepfake scams, a scammer might use AI to impersonate someone, such as a well-known business figure like Elon Musk or Warren Buffett, and ask you to invest your money. Once you send it, the scammers disappear with it.
Other AI investment scams may try to convince you of AI’s ability to predict market outcomes, and you may be tempted to sign up for an AI-based investment platform that “guarantees” winning stock or crypto picks. But these platforms often use fake data and testimonials to create an illusion of success.
Once you invest, your funds might vanish. Or, a scammer might use your data to drain your bank account. You could also become a victim of a rug pull—when scammers abandon their project and run away with investors’ money.
To help avoid AI investment scams, never invest money impulsively, especially if the investment advice comes from an online video or article. If an investment seems too good to be true, it probably is. Remember that scammers are especially likely to push cryptocurrency schemes, as crypto transactions are hard to trace and nearly impossible to reverse.
If you’re interested in investing with an AI trading platform, ensure it’s registered and regulated by financial regulatory authorities such as the Securities and Exchange Commission (SEC), the Financial Industry Regulatory Authority (FINRA), or a local regulator.
AI phishing scams are social engineering attacks that use AI to manipulate you into giving away sensitive information. In a phishing scam, the attacker pretends to be someone trustworthy, and scammers are now using AI tools to carry out these scams en masse. Phishing emails have reportedly increased by over 1,200% since late 2022, largely thanks to generative AI tools.
In the past, scammers had to research their targets and write personalized messages themselves, and the most sophisticated scams, the kind that imitated someone’s voice or appearance, were costly and rare.
Today, AI tools can generate numerous personalized phishing messages in seconds. Hackers can fully automate their phishing efforts, and they can imitate voices and appearances at little cost.
For example, AI tools could crawl social media sites like LinkedIn, gathering data on thousands of individuals, including their work histories, posts, and accomplishments. The tools could then write highly personalized messages to all of them with malware attached. Even a low open rate could lead to multiple hacked accounts, putting victims’ personal data at risk.
AI voice cloning scams use AI technology to imitate a real person’s voice. Scammers then send voice messages on social media or make phone calls pretending to be a target’s loved one or a celebrity. AI voice scams are like audio-only versions of deepfakes. Simple versions are pre-recorded messages sent via phone apps or social media. For example, you might receive a scam call from a politician asking you to donate to their campaign via a malicious website.
More sophisticated AI voice scams enable real-time voice cloning, where scammers can manipulate their voice to sound like someone else during a live conversation. You could get a live call from your boss asking for urgent access to sensitive company info. Or, you could get a call from a loved one in an emergency asking for a cash transfer.
To help avoid AI voice scams, never follow instructions from pre-recorded voice messages, especially if they tell you to fill out forms with your personal info, click links in a follow-up message, or visit unfamiliar websites.
If a friend or family member calls you with an urgent request for money or information, ask them a few questions to verify their identity. Scammers can imitate the voices of friends and family, but they don’t know all the details of your relationships. Ask questions like, “Where and when did we last meet?” or slip in a fake comment about what you last did together and see how they respond.
Better yet, hang up and call them back immediately using the contact info saved in your phone. The real person can confirm whether they actually called you.
To help protect against new AI scams, stay informed about the increasing capabilities of AI. When you understand how AI is used for scams, you’re more likely to recognize when someone is trying to fool you. It’s also important to beef up security on your devices. Even if hackers manage to scam you, strong security controls may help protect you against identity theft.
Here’s how to best protect against AI scams: be skeptical of unsolicited videos, texts, and voice messages; avoid clicking unfamiliar links; use strong, unique passwords with two-factor authentication; and keep your social media profiles private.
If you fall victim to an AI scam, you’re at risk of financial exploitation or identity theft. If your personal data was breached, act quickly to limit the damage: change compromised passwords, notify your bank, and monitor your accounts for suspicious activity.
AI technology is advancing rapidly, and scammers try to stay on the cutting edge of its capabilities, leveraging AI for deepfakes, vishing attacks, AI chatbot scams, and other fraudulent activities. This can make it difficult for people and the authorities to keep up with new types of AI scams.
AI scams have been around since the mid-to-late 2010s, when automated spear phishing scams became more common. Around the same time, AI voice scams and deepfakes emerged. One of the first major cases was reported in 2019, when scammers used AI voice cloning to impersonate an energy company’s CEO and request an emergency transfer of funds.
At the time, AI deepfakes and voice scams were costly and difficult to engineer, but that all changed in 2022, when advanced generative AI tools became widely available to the public. These tools, such as ChatGPT, respond to simple prompts and generate highly personalized content at scale, making phishing emails, fake news, and fraudulent social media posts more convincing.
Today, we’ve entered the age of AI “scam-as-a-service.” Developers are creating AI scam tools that automate a variety of functions, such as collecting user data from social media and sending phishing emails. They then sell these tools on the dark web, turning script kiddies (amateur hackers) into major threats.
AI impersonation scams will likely become even more sophisticated and harder to spot as hackers begin integrating multiple AI technologies into their scams. For example, they might use AI voice cloning and AI chatbots to have convincing conversations via text or social media. AI dating and romance scams like catfishing might also spike as a result.
AI technology—and AI scams—are evolving fast. Staying vigilant and trusting your gut may not be enough to keep you safe from an AI scam anymore. But using a trusted identity theft protection service like LifeLock Standard can help protect your personal data from the consequences of AI scams.
If you fall victim to identity theft, LifeLock will restore your identity, guaranteed.1 And with a LifeLock subscription, you’ll also get alerts for new bank account applications in your name or attempts to take over existing accounts.2 Plus, you’ll get priority, U.S.-based restoration support should the worst happen.
AI scams are on the rise. If you have more questions on how to protect yourself from AI scams, find the answers below.
What type of AI is most commonly used in scams?
The most common AI used in scams today is generative AI. This type of AI can generate new content, such as images, text, and video. Scammers can use free or cheap generative AI tools like ChatGPT, Claude, and Google Gemini to create phishing emails, fake images, fake news articles, and more.
How can I protect myself against AI scams?
To help protect against AI scams, be careful with videos, texts, and voice messages you receive online. Remember that AI scams often impersonate someone you know or admire. Avoid clicking links or visiting unknown websites sent to you via email or social media. Choose strong, unique passwords, set up 2FA for your accounts, and set your social media pages to private to help prevent attackers from targeting or imitating you.
How can I protect myself from scam calls?
You can help protect yourself from scam calls by trusting your gut and using call screeners. If you think a scammer is impersonating one of your contacts, cross-check the caller’s information with the details saved in your contacts. You can also ask them personal questions about your relationship to verify their identity.
1 Restrictions apply. Automatically renewing subscription required. If you are a victim of identity theft and not satisfied with our resolution, you may receive a refund for the current term of your subscription. See LifeLock.com/Guarantee for complete details.
2 We do not monitor all transactions at all businesses.
Editor’s note: Our articles provide educational information. LifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about.