Table of Contents

  1. The Deepfake Revolution in Fraud
  2. CEO Fraud and Business Deepfake Calls
  3. Business Email Compromise Enhanced by Video
  4. Romance Scams Using Deepfake Video Calls
  5. Identity Theft Through AI Video
  6. Investment Scams Using Deepfake Endorsements
  7. Deepfake Extortion and Blackmail
  8. How to Detect Deepfakes in 2026
  9. Corporate and Personal Protection Strategies
  10. Resources

The Deepfake Revolution in Fraud

In February 2024, a finance worker at a multinational company in Hong Kong was tricked into transferring $25.6 million to criminal accounts after attending a video conference call where every other participant -- including the company's CFO -- was a deepfake. The employee had initial suspicions after receiving what appeared to be a phishing email, but those concerns were overridden when they saw and heard familiar colleagues on the video call. Every face, every voice, every mannerism was AI-generated. The entire meeting was fake.

This case, widely reported in global media, marked a turning point in the history of fraud. It demonstrated that AI-powered deepfakes had evolved from a theoretical threat into an operational weapon capable of stealing tens of millions of dollars in a single attack. And it was only the beginning.

By 2026, deepfake technology has become dramatically more accessible and convincing. Real-time face-swapping tools that can generate photorealistic deepfake video during a live call are available as consumer software. Voice cloning requires as little as 3 seconds of audio to produce a convincing replica of anyone's voice. The tools that were restricted to nation-state actors and well-funded criminal organizations five years ago are now available to anyone with a laptop and an internet connection.

This guide covers the six most dangerous categories of video call scams and deepfake fraud active in 2026, along with detection techniques and protection strategies for both individuals and organizations.

Critical Warning

Video is no longer proof of identity. A person appearing on a video call who looks and sounds like someone you know may be a deepfake. Any request for money, sensitive information, or unusual actions made via video call should be verified through a separate, independently initiated communication channel before being acted upon.
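The rule above can be captured as a simple policy check. Below is a minimal sketch in Python; the names (Request, may_act), channel labels, and the list of sensitive request types are illustrative assumptions, not a real API. The point is only that a sensitive request becomes actionable when confirmation arrives on a different channel that the recipient initiated themselves.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Request:
    kind: str                      # e.g. "wire_transfer", "payment_detail_change"
    arrived_via: str               # channel the request came in on, e.g. "video_call"
    confirmed_via: Optional[str] = None      # channel used to confirm it, if any
    recipient_initiated_check: bool = False  # True only if YOU opened that channel

# Illustrative set of request types that always require out-of-band confirmation.
SENSITIVE = {"wire_transfer", "payment_detail_change", "data_export"}

def may_act(req: Request) -> bool:
    """Act on a sensitive request only after out-of-band confirmation."""
    if req.kind not in SENSITIVE:
        return True
    return (
        req.confirmed_via is not None
        and req.confirmed_via != req.arrived_via   # a separate channel...
        and req.recipient_initiated_check          # ...that you dialed yourself
    )
```

Under this rule, a wire transfer requested and "confirmed" on the same video call is rejected; a callback to a number from the company directory passes.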

1. CEO Fraud and Business Deepfake Calls

Critical Risk

How CEO Deepfake Fraud Works

Attackers create deepfake video and audio of corporate executives and use it in video conference calls to authorize fraudulent wire transfers, change payment details, or extract confidential information. The deepfake impersonates the CEO, CFO, or other senior leader on a live call with an employee who has the authority to execute financial transactions.

CEO fraud via deepfake video calls represents the most financially devastating application of deepfake technology in the criminal landscape. The attack typically targets finance department employees, executive assistants, or treasury managers -- anyone with the ability to authorize or execute large financial transactions.

The attack follows a consistent pattern. First, the criminals gather source material: public videos of the target executive from earnings calls, conference presentations, media interviews, and social media. This footage is used to train the deepfake model on the executive's appearance, facial expressions, mannerisms, and speech patterns. Voice cloning tools process audio from the same sources to create a synthetic voice that matches the executive's tone, cadence, and accent.

Next, the criminals study the organization's structure, communication patterns, and financial processes. They identify who reports to whom, which employees can authorize payments, what communication channels are used for financial requests, and what transaction sizes are normal. This intelligence is gathered through social engineering, LinkedIn research, and sometimes through prior email compromises.

The attack itself is typically a video call initiated through the organization's standard conferencing platform -- Zoom, Microsoft Teams, or Google Meet. The employee receives a meeting invitation that appears to come from the executive. On the call, they see and hear what appears to be their CEO or CFO instructing them to make an urgent payment, often related to a plausible scenario: a confidential acquisition, a regulatory settlement, or an emergency vendor payment. The urgency, authority of the apparent requestor, and the visual confirmation of the video all combine to override the employee's normal skepticism.

The $25.6 million Hong Kong case was not unique. In 2025, deepfake CEO fraud attempts were reported across financial services, technology, manufacturing, and healthcare industries. The FBI's Internet Crime Complaint Center reported a 400% increase in business deepfake fraud reports compared to 2023. Losses from confirmed cases exceeded $200 million in the US alone.

Red Flags to Watch For

- Urgent, confidential payment requests made on a video call, especially tied to an acquisition, settlement, or emergency vendor payment
- Pressure to bypass normal approval workflows or to keep the request secret from colleagues
- Meeting invitations from executives who do not normally contact you directly about payments
- Visual artifacts: lip movements out of sync with audio, unnatural blinking, flickering at the edges of the face, lighting that does not match the room
- A participant who declines simple requests such as turning to a full profile or passing a hand in front of their face

How to Protect Your Organization

- Verify every payment instruction through a separately initiated channel: call the executive back on a number from the company directory, never one supplied in the meeting invitation or email
- Require dual approval and a mandatory delay for large or unusual transfers
- Establish pre-agreed code words or challenge questions for authorizing financial transactions
- Train finance, treasury, and executive-assistant staff that video and voice are no longer proof of identity
- Where practical, limit the volume of public executive video footage that can be used as deepfake training material

2. Business Email Compromise Enhanced by Video

Critical Risk

How Video-Enhanced BEC Works

Traditional Business Email Compromise (BEC) -- where attackers impersonate executives via email to request fraudulent payments -- is now augmented with deepfake video calls that confirm the fraudulent email's instructions. The video call eliminates the primary defense against BEC: the victim's instinct to verify unusual email requests by seeing or speaking with the purported sender.

BEC has been the most financially damaging category of cybercrime for years. The FBI's IC3 reported $2.9 billion in BEC losses in 2023 alone. The primary defense that organizations developed was simple: "If you receive an unusual financial request via email, verify it with a phone or video call." Deepfake technology has now undermined that defense.

The combined attack works as follows: the attacker sends a BEC email requesting a payment or wire transfer. The employee, following their training, schedules a video call to verify the request. The attacker joins the call using a deepfake of the executive who supposedly sent the email. The employee sees their "colleague" on video, hears their voice, and receives verbal confirmation of the fraudulent instructions. Every verification step the employee was trained to follow has been satisfied -- by the attacker.

This evolution makes traditional BEC defenses insufficient. Organizations that relied on "call to verify" as their primary control against BEC must now assume that video and voice verification alone cannot be trusted.
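One structural response is to make no single verification step sufficient: above a threshold, a transfer requires two distinct approvers and a mandatory cooling-off delay, so even a convincing deepfake of one executive on one call cannot release funds alone. A minimal sketch follows; the threshold, delay, and function names are illustrative assumptions, not standard values.

```python
import datetime as dt

APPROVAL_THRESHOLD = 25_000          # illustrative: above this, dual control applies
COOLING_OFF = dt.timedelta(hours=4)  # illustrative mandatory delay before release

def release_allowed(amount: float, approvers: set,
                    requested_at: dt.datetime, now: dt.datetime) -> bool:
    """Decide whether a transfer may be released under dual-control policy."""
    if amount <= APPROVAL_THRESHOLD:
        return len(approvers) >= 1
    # Large transfers: two distinct approvers AND a cooling-off period,
    # which buys time for out-of-band checks even under manufactured urgency.
    return len(approvers) >= 2 and (now - requested_at) >= COOLING_OFF
```

The delay is the part that directly counters deepfake pressure tactics: manufactured urgency ("this must go out in the next hour") fails by design.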

3. Romance Scams Using Deepfake Video Calls

Critical Risk

How Deepfake Romance Scams Work

Romance scammers use real-time deepfake video during video calls to appear as the attractive persona they have presented to their victim. This overcomes the traditional defense of "insist on a video call" that was previously effective at exposing romance scammers who used stolen photos. The victim sees a face that matches the photos, eliminating their last line of defense.

Romance scams -- known as "pig butchering" when they lead to investment fraud -- have traditionally been defeated by one simple request: "Let's do a video call." Scammers using stolen photos of models or attractive individuals could not appear on video as the person in the photos. Victims who insisted on video calls could catch the deception.

Deepfake technology has eliminated this safeguard. Modern real-time face-swapping tools can transform the scammer's face into the face from the stolen photos during a live video call. The quality is now sufficient to convince most people, especially when viewed on a phone screen where resolution is limited and the emotional context (excitement about finally "seeing" the person) reduces critical evaluation.

The consequences are severe. Romance scam victims who "verified" their partner through video calls feel even more certain of the relationship's legitimacy. This deeper investment of trust leads to larger financial losses. The average romance scam loss reported to the FTC in 2025 was $64,000, but cases involving deepfake video verification show losses 2-3x higher because the victim's confidence was reinforced by what they believed was visual proof.

How to Protect Yourself

- Never send money, gift cards, or cryptocurrency to someone you have only met online, no matter how many video calls you have had
- Ask for spontaneous actions on camera (a full profile turn, a hand passed in front of the face); real-time face swaps often glitch on these
- Treat short calls, "poor connections," and persistently low-resolution video as warning signs, not excuses
- Run a reverse image search on the person's photos, and be wary of relationships that quickly turn toward investing or money
- Insist on meeting in person before any financial entanglement

4. Identity Theft Through AI Video

Critical Risk

How AI Video Identity Theft Works

Criminals use deepfake technology to impersonate real individuals during video-based identity verification processes. Banks, cryptocurrency exchanges, and other financial services that use video KYC (Know Your Customer) as a security measure are targeted by deepfakes that pass automated and human-reviewed identity checks.

As more financial services move to video-based identity verification -- where a user holds up their ID and speaks into the camera to prove they are who they claim to be -- criminals have adapted by using deepfakes to pass these checks. A deepfake of the account holder's face, combined with a high-quality replica of their ID, can pass both automated liveness detection and human review.

This enables attackers to: open bank accounts in other people's names, take over existing accounts by passing "identity verification" during support calls, apply for loans and credit cards using stolen identities, and access cryptocurrency exchange accounts. The damage compounds because the victim often does not discover the identity theft until accounts have been opened, funds have been moved, and their credit has been damaged.

Deepfake-based KYC fraud is particularly prevalent in the cryptocurrency industry, where many exchanges have adopted video verification as their primary security measure. Attackers who obtain a photo ID (through data breaches, stolen mail, or dark web purchases) can create a deepfake of the ID holder and complete the verification process to gain full access to the exchange account.
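A common countermeasure is to make the verification prompt unpredictable: a pre-rendered deepfake cannot follow actions chosen at check time, and real-time face swaps often produce visible artifacts on occlusion (a hand passing over the face) or a full profile turn. Below is a sketch of server-side challenge generation; the action list, counts, and dictionary keys are illustrative assumptions.

```python
import secrets

# Illustrative prompts; occlusion and full profile turns are difficult for
# real-time face-swapping models to render without visible artifacts.
ACTIONS = [
    "turn your head fully to the left",
    "turn your head fully to the right",
    "pass your hand slowly in front of your face",
    "move much closer to the camera",
]

def make_challenge(n_actions: int = 2, n_digits: int = 6) -> dict:
    """Generate an unpredictable liveness challenge for a video KYC session."""
    # secrets gives cryptographically strong randomness, so the sequence
    # cannot be anticipated and pre-rendered by an attacker.
    actions = secrets.SystemRandom().sample(ACTIONS, k=n_actions)
    digits = "".join(str(secrets.randbelow(10)) for _ in range(n_digits))
    return {"perform": actions, "read_aloud": digits}
```

The spoken random digits additionally force the voice channel to respond live, defeating replayed or pre-generated audio.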

5. Investment Scams Using Deepfake Endorsements

High Risk

How Deepfake Investment Scams Work

Scammers create deepfake videos of celebrities, business leaders, and financial experts endorsing fraudulent investment opportunities. These videos are distributed through social media ads, YouTube, and messaging platforms to drive victims toward Ponzi schemes, fake trading platforms, and rug pulls.

Deepfake endorsement videos have become the most effective marketing tool in the investment scam ecosystem. A 60-second deepfake of Elon Musk, Warren Buffett, or a popular financial YouTuber recommending a specific cryptocurrency, trading platform, or investment opportunity can be produced for under $100 and reach millions of potential victims through targeted social media advertising.

These videos are devastatingly effective because they exploit the authority principle -- people trust recommendations from figures they admire or perceive as credible. When a victim sees a video of a respected financial commentator enthusiastically endorsing a trading platform, they are less likely to conduct independent due diligence. The video "is" the due diligence, from their perspective.

The scale of this problem is staggering. Social media platforms struggle to remove deepfake investment ads quickly enough. A single deepfake video can be reuploaded to thousands of accounts across dozens of platforms within hours of its initial removal. By the time the content is fully suppressed, it has already been viewed millions of times and driven substantial traffic to the fraudulent platform.

6. Deepfake Extortion and Blackmail

Critical Risk

How Deepfake Extortion Works

Criminals create fabricated compromising videos of victims using deepfake technology and threaten to distribute them unless a ransom is paid. The victims may be individuals whose photos are publicly available on social media, or specific targets such as executives, politicians, or public figures.

Deepfake extortion -- sometimes called "sextortion" when it involves fabricated explicit content -- is one of the most personally devastating applications of deepfake technology. A criminal can take publicly available photos from a person's social media accounts and generate realistic-looking compromising video content within hours. They then contact the victim, share a preview of the fabricated content, and demand payment (typically in cryptocurrency) to prevent its distribution.

The psychological impact on victims is severe, even when they know the content is fabricated. The fear that friends, family, colleagues, or the public will see the content -- and that some people will believe it is real -- creates intense pressure to pay. Many victims do pay, which funds further criminal activity and makes them targets for repeated extortion attempts.

Teenagers and young adults are disproportionately targeted. The FBI has reported a significant increase in sextortion cases targeting minors, where social media photos of teenagers are used to generate deepfake content. The emotional vulnerability of young victims makes them more likely to comply with demands and less likely to report the crime to parents or authorities.

How to Detect Deepfakes in 2026

Deepfake Detection Checklist

- Ask the person to turn to a full profile, pass a hand in front of their face, or stand up and move; watch for warping, flicker, or smearing
- Check lip-sync: do mouth movements precisely match the audio?
- Watch the edges of the face, the hairline, and glasses for shimmering or blending artifacts
- Look for lighting and shadows on the face that do not match the background
- Listen for flat intonation, odd pacing, missing breaths, or robotic artifacts in the voice
- Weigh context above visuals: urgency, secrecy, and requests for money or credentials are stronger red flags than any single visual cue
- When in doubt, end the call and re-contact the person through a channel you initiate yourself

Corporate and Personal Protection Strategies

For Organizations

- Assume video and voice can be faked; make out-of-band verification mandatory for all financial and credential-related requests
- Enforce dual approval and cooling-off periods on large or unusual transfers
- Add deepfake scenarios to security awareness training and incident-response exercises
- Use unpredictable challenge-response liveness checks in any video-based identity verification you operate
- Define a clear reporting path for suspected deepfake contact, including who to notify internally and how to report externally

For Individuals

- Treat any request for money or sensitive information on a video call as unverified until confirmed through a channel you initiate
- Agree on code words with family members for emergency requests
- Where practical, limit the amount of video and audio of yourself that is publicly available
- Do not pay deepfake extortion demands; preserve the evidence and report to law enforcement
- Report fraud promptly; the faster it is reported, the better the chance of freezing or recovering funds

Resources

- FBI Internet Crime Complaint Center (ic3.gov) -- report deepfake fraud, BEC, and extortion in the US
- FTC fraud reporting (reportfraud.ftc.gov) -- report romance and investment scams
- NCMEC CyberTipline (report.cybertip.org) -- report sextortion involving minors
- Your bank's fraud department -- contact immediately if money has already been transferred

Seeing Is No Longer Believing. Stay Vigilant.

Check scam.video for the latest AI-powered scam alerts. Verify before you trust.


"We spent decades building trust in video as proof of reality. AI has broken that trust in less than two years. The new rule is simple: verify everything through a channel the attacker does not control. A video call is not verification -- it is a performance." -- @SpunkArt13