Table of Contents
- The Deepfake Revolution in Fraud
- CEO Fraud and Business Deepfake Calls
- Business Email Compromise Enhanced by Video
- Romance Scams Using Deepfake Video Calls
- Identity Theft Through AI Video
- Investment Scams Using Deepfake Endorsements
- Deepfake Extortion and Blackmail
- How to Detect Deepfakes in 2026
- Corporate and Personal Protection Strategies
- Resources
The Deepfake Revolution in Fraud
In February 2024, a finance worker at a multinational company in Hong Kong was tricked into transferring $25.6 million to criminal accounts after attending a video conference call where every other participant -- including the company's CFO -- was a deepfake. The employee had initial suspicions after receiving what appeared to be a phishing email, but those concerns were overridden when they saw and heard familiar colleagues on the video call. Every face, every voice, every mannerism was AI-generated. The entire meeting was fake.
This case, widely reported in global media, marked a turning point in the history of fraud. It demonstrated that AI-powered deepfakes had evolved from a theoretical threat into an operational weapon capable of stealing tens of millions of dollars in a single attack. And it was only the beginning.
By 2026, deepfake technology has become dramatically more accessible and convincing. Real-time face-swapping tools that can generate photorealistic deepfake video during a live call are available as consumer software. Voice cloning requires as little as 3 seconds of audio to produce a convincing replica of anyone's voice. The tools that were restricted to nation-state actors and well-funded criminal organizations five years ago are now available to anyone with a laptop and an internet connection.
This guide covers the six most dangerous categories of video call scams and deepfake fraud active in 2026, along with detection techniques and protection strategies for both individuals and organizations.
Video is no longer proof of identity. A person appearing on a video call who looks and sounds like someone you know may be a deepfake. Any request for money, sensitive information, or unusual actions made via video call should be verified through a separate, independently initiated communication channel before being acted upon.
1. CEO Fraud and Business Deepfake Calls
How CEO Deepfake Fraud Works
Attackers create deepfake video and audio of corporate executives and use it in video conference calls to authorize fraudulent wire transfers, change payment details, or extract confidential information. The deepfake impersonates the CEO, CFO, or other senior leader on a live call with an employee who has the authority to execute financial transactions.
CEO fraud via deepfake video calls represents the most financially devastating application of deepfake technology in the criminal landscape. The attack typically targets finance department employees, executive assistants, or treasury managers -- anyone with the ability to authorize or execute large financial transactions.
The attack follows a consistent pattern. First, the criminals gather source material: public videos of the target executive from earnings calls, conference presentations, media interviews, and social media. This footage is used to train the deepfake model on the executive's appearance, facial expressions, mannerisms, and speech patterns. Voice cloning tools process audio from the same sources to create a synthetic voice that matches the executive's tone, cadence, and accent.
Next, the criminals study the organization's structure, communication patterns, and financial processes. They identify who reports to whom, which employees can authorize payments, what communication channels are used for financial requests, and what transaction sizes are normal. This intelligence is gathered through social engineering, LinkedIn research, and sometimes through prior email compromises.
The attack itself is typically a video call initiated through the organization's standard conferencing platform -- Zoom, Microsoft Teams, or Google Meet. The employee receives a meeting invitation that appears to come from the executive. On the call, they see and hear what appears to be their CEO or CFO instructing them to make an urgent payment, often related to a plausible scenario: a confidential acquisition, a regulatory settlement, or an emergency vendor payment. The urgency, authority of the apparent requestor, and the visual confirmation of the video all combine to override the employee's normal skepticism.
The $25.6 million Hong Kong case was not unique. In 2025, deepfake CEO fraud attempts were reported across financial services, technology, manufacturing, and healthcare industries. The FBI's Internet Crime Complaint Center reported a 400% increase in business deepfake fraud reports compared to 2023. Losses from confirmed cases exceeded $200 million in the US alone.
Red Flags to Watch For
- Unusual urgency. "This must be done today" or "We cannot discuss this with anyone else" are pressure tactics designed to prevent verification.
- Request to deviate from standard processes. Any instruction to bypass normal approval workflows, use new bank accounts, or make payments to unfamiliar entities should trigger immediate suspicion.
- Video quality inconsistencies. Deepfakes in 2026 are highly convincing, but watch for: unnatural blinking patterns, slight delays between lip movement and audio, lighting inconsistencies on the face, and blurriness around the edges of the face or hair.
- Limited interaction. Deepfake operators often keep the call short and discourage questions or extended conversation, because the longer the call continues, the more likely artifacts will become apparent.
- The call is the only communication channel. If the executive has not also communicated about this matter through email, Slack, or other normal channels, it may be because the attacker only has access to the video deepfake, not the executive's other accounts.
How to Protect Your Organization
- Implement mandatory verbal verification for all financial transactions above a threshold. This means calling the executive back on a known phone number -- not the number from the meeting invitation -- to confirm the request.
- Establish code words or challenge-response protocols for high-value transactions that can only be verified in person or through a previously established secure channel.
- Train finance teams specifically on deepfake threats. Show them examples of deepfake technology so they understand what is possible.
- Require multi-person authorization for large transactions. No single individual should be able to execute a wire transfer above a certain amount without a second authorized party confirming independently.
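The multi-person authorization and out-of-band callback rules above can be sketched in code. This is a minimal illustration, not a real treasury system's API; the threshold, approver identifiers, and class names are assumptions chosen for the example.

```python
# Sketch of a dual-authorization rule for wire transfers. Above the
# threshold, two distinct approvers must each confirm through a
# separately initiated channel (e.g. a call-back to a known number),
# never through the video call that delivered the request.
from dataclasses import dataclass, field

DUAL_AUTH_THRESHOLD = 50_000  # illustrative threshold in USD

@dataclass
class WireRequest:
    amount: float
    beneficiary: str
    approvals: set = field(default_factory=set)  # approver IDs

    def approve(self, approver_id: str, verified_out_of_band: bool) -> None:
        # Reject approvals that were not independently verified.
        if verified_out_of_band:
            self.approvals.add(approver_id)

    def releasable(self) -> bool:
        required = 2 if self.amount >= DUAL_AUTH_THRESHOLD else 1
        return len(self.approvals) >= required

req = WireRequest(amount=120_000, beneficiary="Vendor X")
req.approve("alice", verified_out_of_band=True)
assert not req.releasable()  # one approval is not enough above threshold
req.approve("bob", verified_out_of_band=True)
assert req.releasable()
```

The key property is that no single compromised employee, however convincing the video call they attended, can move funds alone.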
2. Business Email Compromise Enhanced by Video
How Video-Enhanced BEC Works
Traditional Business Email Compromise (BEC) -- where attackers impersonate executives via email to request fraudulent payments -- is now augmented with deepfake video calls that confirm the fraudulent email's instructions. The video call eliminates the primary defense against BEC: the victim's instinct to verify unusual email requests by seeing or speaking with the purported sender.
BEC has been the most financially damaging category of cybercrime for years. The FBI's IC3 reported $2.9 billion in BEC losses in 2023 alone. The primary defense that organizations developed was simple: "If you receive an unusual financial request via email, verify it with a phone or video call." Deepfake technology has now undermined that defense.
The combined attack works as follows: the attacker sends a BEC email requesting a payment or wire transfer. The employee, following their training, schedules a video call to verify the request. The attacker joins the call using a deepfake of the executive who supposedly sent the email. The employee sees their "colleague" on video, hears their voice, and receives verbal confirmation of the fraudulent instructions. Every verification step the employee was trained to follow has been satisfied -- by the attacker.
This evolution makes traditional BEC defenses insufficient. Organizations that relied on "call to verify" as their primary control against BEC must now assume that video and voice verification alone cannot be trusted.
3. Romance Scams Using Deepfake Video Calls
How Deepfake Romance Scams Work
Romance scammers use real-time deepfake video during video calls to appear as the attractive persona they have presented to their victim. This overcomes the traditional defense of "insist on a video call" that was previously effective at exposing romance scammers who used stolen photos. The victim sees a face that matches the photos, eliminating their last line of defense.
Romance scams -- known as "pig butchering" when they lead to investment fraud -- have traditionally been defeated by one simple request: "Let's do a video call." Scammers using stolen photos of models or attractive individuals could not appear on video as the person in the photos. Victims who insisted on video calls could catch the deception.
Deepfake technology has eliminated this safeguard. Modern real-time face-swapping tools can transform the scammer's face into the face from the stolen photos during a live video call. The quality is now sufficient to convince most people, especially when viewed on a phone screen where resolution is limited and the emotional context (excitement about finally "seeing" the person) reduces critical evaluation.
The consequences are severe. Romance scam victims who "verified" their partner through video calls feel even more certain of the relationship's legitimacy. This deeper investment of trust leads to larger financial losses. The average romance scam loss reported to the FTC in 2025 was $64,000, but cases involving deepfake video verification show losses 2-3x higher because the victim's confidence was reinforced by what they believed was visual proof.
How to Protect Yourself
- Video calls are no longer definitive proof of identity. If you have never met someone in person, a video call does not confirm they are who they claim to be.
- Ask the person to perform spontaneous, unpredictable actions during the call: hold up a specific number of fingers, touch their ear, turn to show their profile, or hold up a piece of paper with a word you choose. Deepfakes struggle with unpredictable, rapid changes in pose and interaction with physical objects.
- Reverse image search the photos they use. If the photos belong to someone else, the relationship is fraudulent regardless of what you see on video.
- Never send money to someone you have not met in person, regardless of how convincing their video calls appear.
4. Identity Theft Through AI Video
How AI Video Identity Theft Works
Criminals use deepfake technology to impersonate real individuals during video-based identity verification processes. Banks, cryptocurrency exchanges, and other financial services that use video KYC (Know Your Customer) as a security measure are targeted by deepfakes that pass automated and human-reviewed identity checks.
As more financial services move to video-based identity verification -- where a user holds up their ID and speaks into the camera to prove they are who they claim to be -- criminals have adapted by using deepfakes to pass these checks. A deepfake of the account holder's face, combined with a high-quality replica of their ID, can pass both automated liveness detection and human review.
This enables attackers to: open bank accounts in other people's names, take over existing accounts by passing "identity verification" during support calls, apply for loans and credit cards using stolen identities, and access cryptocurrency exchange accounts. The damage compounds because the victim often does not discover the identity theft until accounts have been opened, funds have been moved, and their credit has been damaged.
Deepfake-based KYC fraud is particularly prevalent in the cryptocurrency industry, where many exchanges have adopted video verification as their primary security measure. Attackers who obtain a photo ID (through data breaches, stolen mail, or dark web purchases) can create a deepfake of the ID holder and complete the verification process to gain full access to the exchange account.
5. Investment Scams Using Deepfake Endorsements
How Deepfake Investment Scams Work
Scammers create deepfake videos of celebrities, business leaders, and financial experts endorsing fraudulent investment opportunities. These videos are distributed through social media ads, YouTube, and messaging platforms to drive victims toward Ponzi schemes, fake trading platforms, and rug pulls.
Deepfake endorsement videos have become the most effective marketing tool in the investment scam ecosystem. A 60-second deepfake of Elon Musk, Warren Buffett, or a popular financial YouTuber recommending a specific cryptocurrency, trading platform, or investment opportunity can be produced for under $100 and reach millions of potential victims through targeted social media advertising.
These videos are devastatingly effective because they exploit the authority principle -- people trust recommendations from figures they admire or perceive as credible. When a victim sees a video of a respected financial commentator enthusiastically endorsing a trading platform, they are less likely to conduct independent due diligence. From their perspective, the video itself is the due diligence.
The scale of this problem is staggering. Social media platforms struggle to remove deepfake investment ads quickly enough. A single deepfake video can be reuploaded to thousands of accounts across dozens of platforms within hours of its initial removal. By the time the content is fully suppressed, it has already been viewed millions of times and driven substantial traffic to the fraudulent platform.
6. Deepfake Extortion and Blackmail
How Deepfake Extortion Works
Criminals create fabricated compromising videos of victims using deepfake technology and threaten to distribute them unless a ransom is paid. The victims may be individuals whose photos are publicly available on social media, or specific targets such as executives, politicians, or public figures.
Deepfake extortion -- sometimes called "sextortion" when it involves fabricated explicit content -- is one of the most personally devastating applications of deepfake technology. A criminal can take publicly available photos from a person's social media accounts and generate realistic-looking compromising video content within hours. They then contact the victim, share a preview of the fabricated content, and demand payment (typically in cryptocurrency) to prevent its distribution.
The psychological impact on victims is severe, even when they know the content is fabricated. The fear that friends, family, colleagues, or the public will see the content -- and that some people will believe it is real -- creates intense pressure to pay. Many victims do pay, which funds further criminal activity and makes them targets for repeated extortion attempts.
Teenagers and young adults are disproportionately targeted. The FBI has reported a significant increase in sextortion cases targeting minors, where social media photos of teenagers are used to generate deepfake content. The emotional vulnerability of young victims makes them more likely to comply with demands and less likely to report the crime to parents or authorities.
How to Detect Deepfakes in 2026
- Watch the eyes. Deepfakes often have unnatural blinking patterns -- too frequent, too infrequent, or asymmetric. Real humans blink every 2-8 seconds with both eyes simultaneously.
- Look at the edges. The boundary between the face and the background/hair often shows subtle blurring, shimmer, or color bleeding in deepfakes. This is most visible when the subject turns their head.
- Check for lip sync accuracy. Even high-quality deepfakes can show slight delays or misalignment between lip movements and spoken audio, especially with sibilant sounds (s, sh, ch).
- Request spontaneous actions. Ask the person to turn their head to show their profile, touch their face, hold up objects, or move in ways that were not anticipated. Deepfakes handle unpredictable motion poorly.
- Watch for consistent lighting. The lighting on a deepfake face may not perfectly match the lighting in the environment. Look for shadows that fall in the wrong direction or skin tone that changes inconsistently as the head moves.
- Look at teeth and tongue. The interior of the mouth is one of the hardest areas for deepfakes to render accurately. Teeth may appear blurred, uniform, or oddly shaped.
- Check for temporal consistency. Over a long call, deepfakes may show brief "glitches" -- momentary distortions, freezes, or shifts in facial features. These are often so brief they are easy to miss unless you are specifically watching for them.
- Use deepfake detection tools. Products from companies like Microsoft (Video Authenticator), Intel (FakeCatcher), and Sensity offer automated deepfake detection capabilities, though none are 100% reliable.
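The blink-pattern observation above can be turned into a simple automated heuristic. This is a toy sketch, not a production detector: it assumes blink timestamps have already been extracted by some upstream eye tracker (not shown), and the thresholds are illustrative, not tuned against real data.

```python
# Toy blink-rate heuristic: real humans typically blink every 2-8
# seconds, so flag streams whose blink intervals fall mostly outside
# that range. All thresholds are illustrative assumptions.
def blink_intervals(timestamps):
    """Seconds between consecutive blinks."""
    return [b - a for a, b in zip(timestamps, timestamps[1:])]

def suspicious_blinking(timestamps, low=2.0, high=8.0, tolerance=0.5):
    """True if fewer than `tolerance` of the intervals look normal."""
    intervals = blink_intervals(timestamps)
    if not intervals:
        return True  # no blinks at all over the window is itself a flag
    normal = sum(1 for i in intervals if low <= i <= high)
    return normal / len(intervals) < tolerance

human = [0.0, 3.1, 7.4, 12.0, 15.2]   # intervals of roughly 3-5 s
fake = [0.0, 0.4, 0.9, 14.0, 14.3]    # rapid bursts and long gaps
assert not suspicious_blinking(human)
assert suspicious_blinking(fake)
```

A real detector would combine many such signals (lip-sync offset, edge artifacts, temporal glitches) rather than relying on any single one, which is why the commercial tools listed above analyze the full video stream.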
Corporate and Personal Protection Strategies
For Organizations
- Implement out-of-band verification for all high-value transactions. Any financial request received via video call must be confirmed through a separate, independently initiated communication channel (a phone call to a known number, an in-person visit, or a pre-established secure messaging system).
- Establish code words for financial transactions. Pre-arrange secret phrases that change weekly and are only known to authorized personnel. Request the code word during any video call involving financial instructions.
- Deploy deepfake detection technology on video conferencing platforms. Solutions from Pindrop, Reality Defender, and other vendors can analyze video streams in real time for deepfake indicators.
- Conduct regular training. Employees should see live demonstrations of deepfake technology so they understand both its capabilities and its current limitations. Training should be updated quarterly as the technology evolves.
- Limit public video exposure of executives. Every public video of a CEO or CFO provides training data for potential deepfakes. While this cannot be eliminated, awareness of the risk can inform decisions about video content publication.
For Individuals
- Never trust video alone for identity verification. If someone you know makes an unusual request during a video call, end the call and contact them independently through a different channel.
- Limit the personal video and photos you share publicly. Every video and photo you post can be used to create a deepfake of you.
- Be skeptical of celebrity endorsement videos. Verify any investment or product recommendation through the celebrity's official accounts and channels before acting on it.
- If you are a victim of deepfake extortion, do not pay. Report it to law enforcement (FBI IC3), document the communication, and do not engage further with the criminal. Paying only encourages continued extortion.
- Stay informed about deepfake technology. Follow @SpunkArt13 and check scam.video for the latest developments in AI-powered fraud.
Resources
- scam.video -- Our video and content scam database. Watch, learn, and report deepfake scams.
- scam.ink -- The complete scam database across all categories.
- SpunkArt.com -- Privacy tools and security resources.
- FBI IC3 (ic3.gov) -- Report deepfake fraud and AI-powered scams to the FBI.
- FTC ReportFraud (reportfraud.ftc.gov) -- Report consumer-targeting deepfake scams.
- StopNCII.org -- If non-consensual intimate deepfakes of you are circulating, this organization can help with removal.
- Microsoft Video Authenticator -- Tool for analyzing video content for deepfake manipulation.
Seeing Is No Longer Believing. Stay Vigilant.
Check scam.video for the latest AI-powered scam alerts. Verify before you trust.
"We spent decades building trust in video as proof of reality. AI has broken that trust in less than two years. The new rule is simple: verify everything through a channel the attacker does not control. A video call is not verification -- it is a performance." -- @SpunkArt13