
Imagine this: It’s Monday morning. You’re on a video call with your company’s finance team and the Regional Director. Everyone’s in their usual virtual boxes, nodding along. The Director looks a little grainy, perhaps due to some poor Wi-Fi.
Then they say:
“We need to transfer $25 million right now. This is confidential and time-sensitive.”
Would you hesitate? Probably not. After all, you see them. You hear them. They’re real, right?
Unfortunately, for one employee at a global engineering giant, that assumption led to one of the most jaw-dropping (and unintentionally comedic) AI cybercrime stories of the year.
The Deepfake Dilemma: When AI Meets Social Engineering
In early 2024, Hong Kong police revealed that an employee had been tricked into transferring roughly $25 million after attending a video call with what appeared to be the company’s Chief Financial Officer (CFO) and other colleagues.
Except none of them were real.
Criminals had used AI-generated deepfake technology to clone the CFO’s face and voice, creating a shockingly convincing illusion. To make matters worse, the scammers even faked the background participants: multiple AI-generated coworkers nodding silently in approval.
The result? A Hollywood-level scam pulled off with just a laptop, some deepfake software, and an inbox full of confidence.
The Funny but Teachable Side of the Story
The employee didn’t fall for any scam involving a “Nigerian Prince” or a sketchy notice from “FedEx Support.” They fell for their boss on Zoom.
It’s absurd, but it’s also a symptom of how believable digital human beings have become. Breach Secure Now (BSN) Chief Executive Officer Art Gross even has his own AI likeness, known as Artemis: an AI clone used in the company’s promotional and teaching materials.
The Real Lesson: Verify Before You Trust
We’re entering an era where “seeing is no longer believing.” Here are some key lessons to help ensure you aren’t scammed by an AI likeness:
- Always verify sensitive requests through a second channel. If your “boss” asks you to wire millions of dollars, pause and contact them on a known number.
- Train employees on deepfake awareness. Many security awareness programs still focus only on phishing emails; video and voice verification need to be part of the curriculum as well.
- Use multi-factor authentication (MFA) for critical transactions. Multi-level approvals, unique verification codes, or secondary sign-offs can stop fraud before money ever leaves the bank.
- Leverage AI to work for you defensively. Tools now exist that detect facial artifacts, lip-sync errors, and voice-modulation inconsistencies in real time; organizations should adopt them.
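To make the first three controls concrete, here is a minimal sketch of a payment-release gate that requires an out-of-band callback plus two independent sign-offs before a large transfer goes out. All names, fields, and the threshold are hypothetical; a real workflow would live in your payment or ERP system, not a script.

```python
from dataclasses import dataclass, field

# Hypothetical policy: transfers above this amount need extra controls.
APPROVAL_THRESHOLD = 10_000

@dataclass
class TransferRequest:
    amount: float
    requester: str
    verified_out_of_band: bool = False       # set after a callback on a known number
    approvers: set = field(default_factory=set)

def verify_out_of_band(request: TransferRequest, confirmed_by_callback: bool) -> None:
    # Call the requester back on a known-good number; never trust the
    # channel the request arrived on (email, chat, or a video call).
    request.verified_out_of_band = confirmed_by_callback

def approve(request: TransferRequest, approver: str) -> None:
    # Each approver signs off independently; self-approval is ignored.
    if approver != request.requester:
        request.approvers.add(approver)

def can_release(request: TransferRequest) -> bool:
    # Small transfers follow the normal process; large ones need the
    # callback verification AND at least two independent approvers.
    if request.amount < APPROVAL_THRESHOLD:
        return True
    return request.verified_out_of_band and len(request.approvers) >= 2

# Usage: a $25M request from a "CFO" on a video call stays blocked until
# it is verified on a known channel and countersigned by two other people.
req = TransferRequest(amount=25_000_000, requester="cfo")
assert not can_release(req)
verify_out_of_band(req, confirmed_by_callback=True)
approve(req, "controller")
approve(req, "treasurer")
assert can_release(req)
```

The point of the sketch is that no single person, and no single communication channel, can move a large sum on its own; a deepfake would have to fool the callback and two other humans.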
The Human Firewall Still Matters Most
What makes this story so powerful is that it’s not about a code vulnerability; it’s about trust. AI didn’t break through a firewall; it broke a person’s confidence in what they saw and interacted with. That’s why the next frontier in cybersecurity isn’t just about building stronger technology; it’s about training smarter humans.
Deepfakes aren’t going anywhere, and honestly, neither is human error. When people know what to look for and how to pause before they click, they can stop threats that even the best technology might miss.
That’s why ongoing cybersecurity training isn’t just helpful, it’s essential. It keeps awareness fresh and turns everyday employees into confident, informed defenders.
If you haven’t made training part of your routine yet, now’s the time. Because the best protection starts with people.
Now Available: Gen AI Certification From BSN
Lead Strategic AI Conversations with Confidence
Breach Secure Now’s Generative AI Certification helps MSPs simplify the AI conversation, enabling clients to unlock the value of gen AI for their business, build trust, and drive growth – positioning you as a leader in the AI space.