Help protect your personal information

Subscribe to LifeLock Ultimate Plus to help protect against identity fraud that can stem from AI scams.

5 AI scams to watch out for in 2025

Artificial intelligence (AI) has provided hackers and scammers with new ways to defraud their victims. AI scams, like AI-voice cloning and AI deepfake scams, can convincingly impersonate your friends and family. Read on to learn more about common AI scams and the consequences, including identity theft. Then, subscribe to LifeLock for powerful identity theft protection you can trust.

An image of a robot, representing AI bot-enabled scams.

What would you do if you got a call from a loved one in an emergency asking for money? You may not think twice about giving them a hand. In reality, your “loved one” may be a hacker using AI-generated voice-cloning technology.

Advances in AI technology have made it easier than ever for scammers to imitate someone on the phone, by text, or even in a video. This is making it harder for people to interact safely online and over the phone—and people of all ages can easily fall victim to AI scams in a growing number of areas, including finance, online dating, social media, and more.

AI technology is advancing rapidly, which means AI scams are likely to become more sophisticated and prolific in the coming years. It’s more important than ever to learn about the threats posed by AI and how to help protect yourself from becoming a victim.

What are AI scams?

AI scams involve fraudulent activity that uses artificial intelligence technology to exploit people—often impersonating a trusted person to access personal information. Common AI scams include AI-voice cloning, fake AI chatbots, and deepfakes.

AI technology is groundbreaking because it can learn and make predictions. For example, the well-known AI tool ChatGPT was trained on vast amounts of human text and speech. Its algorithms identify patterns to learn how to interact with humans naturally. So, when you ask the AI a question, it can reply as a human would. In fact, according to one survey by Tooltester, more than half of people who interact with AI-generated content can’t distinguish it from human-written content, suggesting these tools may have effectively passed the Turing Test.

While AI technology is exciting, its increasing sophistication is making it much easier for scammers to commit cybercrime such as fraud and identity theft.

AI algorithms can learn a person’s appearance and create original photorealistic images of them. They can also learn what a person sounds like and recreate their voice on a phone call. AI can even combine the two to create a realistic video of someone doing or saying almost anything.

When hackers can impersonate someone, scamming their victims into giving up their personally identifiable information becomes much easier.

5 types of emerging AI scams

As AI technology improves, AI scams of all kinds are becoming more sophisticated, and cybercriminals are getting more creative in leveraging new AI technology for profit. We’ll show you a few different types of AI scams to look out for, how to spot them, and how to defend against them.

Here are common AI scams to watch out for:

1. AI chatbot scams

AI chatbot scams are fake online chat encounters in which an AI imitates a person, such as a customer support representative. The AI chatbot impersonates a human and typically asks the user for their personal information.

AI chatbots often pose as trustworthy entities, such as technical support on a known website. They may also lure you in by claiming you’ve won a prize, offering investment advice, or pretending to be a match on a dating site.

When you start chatting with an AI chatbot, you might believe it’s a real human. But these signs can help you detect AI chatbot scams:

  • Unnatural responses: AI chatbot responses may have strange wording or an unnatural tone. Or, they may ignore your response completely and push the conversation in a new direction. Of course, it’s important to note that with the rapid advances in AI technology, chatbot responses are sounding more and more natural.
  • Fast responses: AI can respond instantly to your questions or other inputs. Humans generally take more time to consider what you’ve written before replying.
  • Repetition of the same phrases: AI may repeat phrases word for word, especially if you repeat similar questions or try to talk about topics it wasn’t trained on.
  • A quick pitch: AI hackers know that sooner or later, you may figure out the chatbot is not a real human. So, they design the chatbot to make its pitch (e.g., asking for your account number) early in the conversation.

2. AI deepfake scams

An AI deepfake scam involves a fake video of a real—usually famous—person, created by a scammer who has trained an AI tool on that person’s actual videos and vocal recordings. Once the AI has enough data, it can generate videos of the person in almost any situation. Deepfake videos usually feature celebrities or politicians because there are lots of recordings of them for AI to learn from.

A hacker may send you a video of a celebrity you admire asking you to donate to a cause, with a link that leads to a malicious website. Or, a politician you trust may announce a lucrative tax rebate, directing you to a tax scam site that requests your Social Security number.

Deepfakes can be very difficult to detect, and it’s getting easier to deepfake almost anyone due to the amount of video content many of us post on social media. But one way to verify whether a video is real is to look at where it’s posted. If the video was posted on the person’s official account, it’s probably real. Even this isn’t foolproof, though: cybercriminals can hack real accounts and post deepfakes on them, and well-known individuals have even posted deepfakes of other people on their own accounts.

The best way to protect yourself is to avoid clicking links or following the advice you get from videos online.

A graphic illustrating how AI deepfake scams work

3. AI investment scams

AI investment scams convince people to give up their money by promising big returns or by encouraging them to sign up for illegitimate cryptocurrency or stock trading platforms. They do this using AI-driven social engineering tactics like AI phishing, AI deepfakes, and AI chatbots.

As with deepfake scams, a scammer might use AI to impersonate someone, such as a well-known business or investment figure like Elon Musk or Warren Buffett, and ask you to invest your money. Once you send your money, the scammers disappear with it.

Other AI investment scams may try to convince you of AI’s ability to predict market outcomes, and you may be tempted to sign up for an AI-based investment platform that “guarantees” winning stock or crypto picks. But these platforms often use fake data and testimonials to create an illusion of success.

Once you invest, your funds might vanish. Or, a scammer might use your data to drain your bank account. You could also become a victim of a rug pull—when scammers abandon their project and run away with investors’ money.

To help avoid AI investment scams, never invest money impulsively, especially if the investment advice comes from an online video or article. If an investment seems too good to be true, it’s probably a scam. Remember that hackers are especially likely to push cryptocurrency scams, as crypto transactions are hard to trace and nearly impossible to reverse.

If you’re interested in investing with an AI trading platform, ensure it’s registered and regulated by financial regulatory authorities such as the Securities and Exchange Commission (SEC), the Financial Industry Regulatory Authority (FINRA), or a local regulator.

4. AI phishing scams

AI phishing scams are a type of social engineering attack that uses AI to manipulate you into giving away sensitive information. In a phishing scam, the attacker often pretends to be someone trustworthy, and scammers are now using AI tools to carry out these scams en masse. Phishing emails have increased by over 1,200% since late 2022, largely thanks to the use of generative AI tools.

In the past, scammers had to research their targets and write personalized messages themselves, and the most sophisticated scams, which could imitate someone’s voice or appearance, were costly and rare.

Today, AI tools can generate numerous personalized phishing messages in seconds. Hackers can fully automate their phishing efforts, and they can imitate voices and appearances at little cost.

For example, AI tools could crawl social media sites like LinkedIn, potentially gathering data on thousands of individuals, including their work histories, posts, and accomplishments. They could then write highly personalized messages to all of them with malware attached. Even a low open rate could lead to multiple hacked accounts, putting victims’ personal data at risk.

A graphic illustrating the dangers of AI phishing scams

5. AI voice cloning scams

AI voice cloning scams use AI technology to imitate a real person’s voice. Scammers then send voice messages on social media or make phone calls pretending to be a target’s loved one or a celebrity. AI voice scams are like audio-only versions of deepfakes. Simple versions are pre-recorded messages sent via phone apps or social media. For example, you might receive a scam call from a politician asking you to donate to their campaign via a malicious website.

More sophisticated AI voice scams enable real-time voice cloning, where scammers can manipulate their voice to sound like someone else during a live conversation. You could get a live call from your boss asking for urgent access to sensitive company info. Or, you could get a call from a loved one in an emergency asking for a cash transfer.

To help avoid AI voice scams, never follow instructions given by pre-recorded voice messages, especially if they tell you to fill out forms with your personal info, click links in a message they’ll send, or visit websites.

If a friend or family member calls you with an urgent request for money or information, ask them a few questions to verify their identity. Scammers can imitate the voices of friends and family, but they don’t know all the details of your relationships. Ask questions like, “Where and when did we last meet?” or slip in a fake comment about what you last did together and see how they respond.

Better yet, hang up and call them back immediately using the contact info saved in your phone. The real person can confirm whether they actually called you.

How to protect against AI scams

To help protect against new AI scams, stay informed about the increasing capabilities of AI. When you understand how AI is used for scams, you’re more likely to recognize when someone is trying to fool you. It’s also important to beef up security on your devices. Even if hackers manage to scam you, strong security controls may help protect you against identity theft.

Here’s how to best protect against AI scams:

  • Look out for impersonation: If an interaction doesn’t seem genuine, you may be getting scammed. Always remember that the “person” you’re interacting with online may not be who you think they are.
  • Set up two-factor authentication: When you set up 2FA on an online account (such as your bank account), you’ll need two identifying factors to sign in, such as a password and a code sent to your phone. To hack into your account, a scammer would need access to both.
  • Verify information: Don’t immediately trust what people tell you online or on the phone. Cross-check the info to verify that it’s correct. When you get a suspicious call or email, check the number or email address. For example, if you receive an email with a video of someone famous giving investment advice, research what other experts have to say.  
  • Avoid giving out personal information: It’s never a good idea to give out your personally identifiable information online. If anyone asks for your SSN, driver’s license number, passport number, etc., you may be getting scammed.
  • Secure your social media profiles: Personal information on your social media feeds can help hackers piece together info to scam you or imitate you. Set your profiles to private and avoid posting personal info or oversharing on social media.
  • Don’t act immediately: Scammers know their AI tricks won’t fool you for long, so they’ll push you to act urgently. Take the time to verify any offer before making a decision; legitimate offers rarely expire within minutes or hours.
  • Use a scam detection tool: Norton Genie leverages the power of AI technology to help detect and fight AI scams. Simply upload a suspicious text, email, social media post, or website address, and you’ll know in seconds if it might be a scam.

What to do if you get scammed with artificial intelligence

If you fall victim to an AI scam, you are at risk of financial exploitation or identity theft. If your personal data was breached, follow the tips below to protect yourself from further harm.

  • Report the scam to the FTC: The Federal Trade Commission (FTC) is the U.S. government agency that handles reports of fraud and identity theft, including AI scams. The best way to report identity theft is to go to the FTC website, where you can fill in the important details of the theft you experienced.
  • Freeze your credit: Freezing your credit will prevent scammers from using your personal information like your Social Security number to apply for loans or open new credit cards. Contact the major credit bureaus (Equifax, Experian, and TransUnion) to request a credit freeze.
  • Set up fraud alerts: Call your bank and credit card companies to put fraud alerts on all of your accounts. Fraud alerts tell these institutions to take extra steps to verify your identity before opening new accounts.
  • Notify your financial institutions: Inform your bank and credit card companies that your personal information has or may have been stolen. Many of these institutions will help monitor your accounts for suspicious activity. 
  • Change passwords: Change all of your passwords, especially if you use the same password for multiple accounts. Then, make sure to create strong and unique passwords on all your accounts.
  • Monitor your credit reports and accounts: Check your credit reports and account activity often for any unauthorized activity.
  • Secure your devices: Scan your devices for malware using a virus scanner, as scammers may have infected your device. Update your devices to ensure you have the latest security features.

How are AI scams evolving?

AI technology is advancing rapidly, and scammers try to stay on the cutting edge of its capabilities, leveraging AI for deepfakes, vishing attacks, AI chatbot scams, and other fraudulent activities. This can make it difficult for people and the authorities to keep up with new types of AI scams.

AI scams have been around since the mid-to-late 2010s, when automated spear phishing scams became more common. Around the same time, AI voice scams and deepfakes emerged. One of the first major AI deepfake scams to be reported happened in 2019, when scammers used AI voice cloning to impersonate an energy company’s CEO and request an emergency transfer of funds.

At the time, AI deepfakes and voice scams were costly and difficult to engineer, but that all changed in 2022 when advanced generative AI tools became widely available to the public. These products, such as ChatGPT, respond to user prompts and can generate highly personalized content at scale, making phishing emails, fake news, and fraudulent social media posts more convincing.

Today, we’ve entered the age of AI “scam-as-a-service.” Developers are creating AI scam tools that perform a variety of automated functions, such as collecting user data from social media, sending phishing emails, and more. They can then sell these tools on the dark web, turning script kiddies (amateur hackers) into major threats.

AI impersonation scams will likely become even more sophisticated and harder to spot as hackers begin integrating multiple AI technologies into their scams. For example, they might use AI voice cloning and AI chatbots to have convincing conversations via text or social media. AI dating and romance scams like catfishing might also spike as a result.

Bounce back from AI scams

AI technology—and AI scams—are evolving fast. In fact, three out of every four consumers are concerned AI might increase cybercriminals' ability to scam them. Staying vigilant and trusting your gut may not be enough to keep you safe from an AI scam anymore. But using a trusted identity theft protection service like LifeLock Standard can help protect your personal data from the consequences of AI scams.

If you fall victim to identity theft, LifeLock will restore your identity, guaranteed.1 And with a LifeLock subscription, you’ll also get alerts for new bank account applications in your name or attempts to take over existing accounts.2 Plus, you’ll get priority, U.S.-based restoration support should the worst happen.

FAQs about AI scams

AI scams are on the rise. If you have more questions on how to protect yourself from AI scams, find the answers below.

What AI is being used for scams?

The most common AI used in scams today is generative AI. This type of AI technology can generate new content, such as images, text, and video. Scammers can use free or cheap generative AI tools like ChatGPT, Claude, and Gemini to generate phishing emails, fake images, fake news articles, and more.

How do you protect yourself against AI?

To help protect against AI scams, be careful of videos, texts, and voice messages you receive online. Remember that AI scams will often impersonate someone you know or admire. Avoid clicking links or visiting unknown websites sent to you via email or social media. Choose strong, unique passwords and set up 2FA for your accounts. Then, set your social media pages to private to help prevent attackers from targeting or imitating you.

How can I detect AI scam calls?

You can help protect yourself from scam calls by trusting your gut and using call screeners. If you think a scammer is impersonating one of your contacts, cross-check the caller’s information with the information you have in your contacts. You can also ask them personal questions related to your relationship to verify their identity.

1 Restrictions apply. Automatically renewing subscription required. If you are a victim of identity theft and not satisfied with our resolution, you may receive a refund for the current term of your subscription. See LifeLock.com/Guarantee for complete details.

2 We do not monitor all transactions at all businesses.

Editor’s note: Our articles provide educational information. LifeLock offerings may not cover or protect against every type of crime, fraud, or threat we write about.
