With new technologies on the rise, cybercriminals are turning to AI to launch more sophisticated and more relentless fraud attacks against their targets. They rely on ignorance and trust, hoping that their victims will take these AI-powered scams at face value and not recognize them for what they are.
According to recent Pew Research Center polling, most Americans have serious concerns about the intersection of AI technological advances and the rise of increasingly sophisticated impersonation scams: 7 in 10 Americans believe that the pervasiveness of AI will make online scams and fraud attacks more common.1 McAfee reports that “the average person encounters 10 scams daily, while Americans face 14.4 scams daily, including 2.6 deepfake videos.”2 As these scams increase in frequency, it is vital for individuals and businesses to rely on trusted sources for information about the different types of AI fraud and how best to safeguard against these attacks.
Types of AI Fraud
- Deepfake videos attack their targets by assuming the identity of someone the victim knows and trusts, such as a family member, a friend, or a colleague. AI technology has streamlined the creation of deepfakes, and with every advance these videos become more convincing and harder to detect. According to the World Economic Forum, in 2023 there was a 704% increase in these types of attacks, with AI technologies facilitating the creation of convincing deepfakes.3
- Voice cloning is related to deepfake technology and employs AI to alter the scammer’s voice so that it sounds like someone you know. A cybercriminal obtains recordings of the person they wish to impersonate and feeds those recordings into an AI-powered voice-cloning program to create a convincing simulation.
- Synthetic identities are fake identities (often a fabricated SSN, a real, deliverable address, and other identity-related credentials) that scammers use to conceal bad credit ratings and obtain loans they have no intention of repaying. Scammers use generative AI to quickly create new accounts and fraudulent credentials, then deploy these synthetic identities to dupe businesses and individuals.
- Impersonation attacks can combine faked identities, deepfakes, and voice cloning, all with the intention of mimicking the voice, writing style, or other attributes of a person known to the victim. These attacks can have serious consequences: scammers have used AI to impersonate CEOs and other top business executives, in one instance managing to steal $25M from their target.4
Impact on Businesses
AI fraud threatens more than financial loss and data security. It can cause reputational damage and operational disruption, resulting in lost time, added expense, and a potential loss of customers. In recent years, Forbes has reported incidents of AI-driven fraud that resulted in high-profile losses for businesses: in 2021, a business lost $35 million to an impersonation fraud that used deepfake technology, and in 2019, voice-cloning (“deep voice”) technology allowed scammers posing as a CEO to steal $243,000 from a British company.5 Additionally, scams that involve stolen identities and data breaches directly and negatively impact consumer perceptions of a company’s security systems and data safeguards.
Detection and Prevention
There are several proactive steps that businesses can take to reduce the impact of these AI-powered scams and the threats they pose to a company’s finances and data:
- Invest in employee training. Offer fraud awareness training that empowers employees to understand the types of threats most commonly directed at businesses and how AI technologies make these scams more sophisticated and more frequent.
- Use Multi-factor Authentication (MFA) and verification protocols. Whenever possible, use multi-factor authentication to keep your accounts safe and limit scammers’ ability to access your data. Never click suspicious links. If you receive a communication via email, phone, or video chat and something seems off or oddly urgent about it, verify that the communication is authentic before acting. Do not respond directly to any suspicious message; instead, contact the sender through a separate channel you know to be legitimate. (A brief sketch of how one common form of MFA works appears after the resource list below.)
- Stay informed. AI technologies and the criminal activity built on them are constantly evolving. Stay up to date on the latest scams that cybercriminals are employing with consumer resources such as:
○ Internet Crime Complaint Center (IC3), FBI: www.ic3.gov
○ National Consumers League's Fraud Center: www.fraud.org
○ The Federal Trade Commission: www.ftc.gov
○ Federal Bureau of Investigation: www.fbi.gov/scams-safety
○ National Cyber Security Alliance: www.staysafeonline.org
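MFA deserves a closer look because it directly blunts AI-powered impersonation: a cloned voice or deepfaked video cannot produce the one-time code generated on the legitimate user’s device. The sketch below shows, under stated assumptions, how time-based one-time passwords (TOTP), the codes produced by common authenticator apps, are provisioned and checked. It is a minimal Python sketch using the third-party pyotp library; the account name and issuer shown are hypothetical placeholders, and a production system would also need secure secret storage, rate limiting, and backup codes.

```python
# Minimal sketch of TOTP-based multi-factor authentication using the
# third-party pyotp library (pip install pyotp). Illustrative only; a real
# deployment also needs secure secret storage, rate limiting, and backup codes.
import pyotp

# 1. Enrollment: generate a per-user secret and share it with the user's
#    authenticator app (usually rendered as a QR code built from this URI).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
provisioning_uri = totp.provisioning_uri(
    name="employee@example.com",   # hypothetical account name
    issuer_name="Example Corp",    # hypothetical issuer
)
print("Scan with an authenticator app:", provisioning_uri)

# 2. Login: after the password checks out, ask for the current six-digit code
#    and verify it against the stored secret. valid_window=1 tolerates a small
#    amount of clock drift between the server and the user's phone.
def second_factor_ok(stored_secret: str, submitted_code: str) -> bool:
    return pyotp.TOTP(stored_secret).verify(submitted_code, valid_window=1)

# Example check (in practice the code comes from the user's authenticator app).
print("Second factor accepted:", second_factor_ok(secret, totp.now()))
```

Even when a scammer convincingly impersonates a colleague or executive, a second factor like this, combined with out-of-band confirmation of any unusual request, denies them the credentials they need.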
1 https://www.pewresearch.org/internet/2025/07/31/online-scams-and-attacks-in-america-today/
2 https://www.mcafee.com/blogs/internet-security/state-of-the-scamiverse/
3 https://www.weforum.org/stories/2025/01/how-ai-driven-fraud-challenges-the-global-economy-and-ways-to-combat-it/
4 https://www.cfodive.com/news/scammers-siphon-25m-engineering-firm-arup-deepfake-cfo-ai/716501/
5 https://www.forbes.com/councils/forbestechcouncil/2025/03/10/ai-driven-phishing-and-deepfakes-the-future-of-digital-fraud/