This TechCrunch article explores how deepfake attacks are escalating and why enterprises must strengthen their AI defenses as generative AI lowers the cost and complexity of identity fraud.
The piece highlights how Incode is addressing these threats with Deepsight, its most advanced defense against deepfake and synthetic identity attacks, designed to protect digital trust as fraud tactics outpace traditional identity checks.
Read the full text of the article below, published by TechCrunch on January 21, 2026.
Deepfake Attacks Are Escalating – Enterprises Must Strengthen AI Defenses
Deepfakes have evolved far beyond internet curiosities. Today, they are a potent tool for cybercriminals, enabling sophisticated fraud across onboarding, account recovery, partner verification and employee authentication. In 2025 alone, deepfake attacks are estimated to have cost organizations as much as $1.5 billion, illustrating the high stakes of AI-driven impersonation in business operations.
As AI-generated content becomes more realistic and accessible, threat actors are weaponizing it with increasing precision. Nearly three-quarters (72%) of business leaders anticipate that AI-generated fraud, including deepfakes, will be a top operational challenge in 2026. Enterprises must urgently find ways to separate reality from fiction before these attacks compromise trust, revenue, and operational continuity.
“Deepfakes have moved well beyond novelty; they’ve become a serious fraud weapon,” said Ricardo Amper, Founder and CEO of Incode Technologies. “When identity itself can be convincingly faked, trust collapses across every digital interaction. The challenge now is proving that there’s a real human behind the camera, not a synthetic one, before that trust is exploited.”
Data reflects this acceleration. Nearly half (46%) of businesses surveyed in 2025 reported an annual increase in deepfake and generative AI fraud. Since 2023, deepfake-driven fraud attempts have doubled in banking, surged sixfold in payments, and increased sevenfold in gig work – showing how quickly these attacks are scaling across digital-first industries.
Beyond Visual Detection: The Real Risk
Traditional deepfake detection has largely focused on what’s visible in the frame – facial artifacts, unnatural motion, and other visual anomalies. That still matters, all the more so now that the latest generative models produce hyper-realistic video and imagery, but it’s no longer enough on its own: attackers aren’t only manipulating the face – they’re manipulating the delivery path.
Increasingly, cybercriminals use injection-style tactics to feed pre-recorded or synthesized video into an authentication flow through virtual cameras, emulated devices, or compromised endpoints. In those scenarios, the system may be analyzing a stream that never came from a real, live camera capture in the first place – which can undermine standard verification steps.
That’s why modern defenses need to be holistic: validate the integrity of the device and camera pipeline, detect signs of stream substitution or tampering, and still inspect the video itself for manipulation. Treating the content and the capture path as a single threat surface is what closes the gap between “spotting a fake face” and stopping an attacker who can inject a fake feed.
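To make that idea concrete, here is a minimal sketch of a verification gate that treats the capture path and the content as one threat surface. This is purely illustrative and not Incode’s implementation; every signal name and threshold below is a hypothetical assumption.

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    """Signals collected during one verification session (all hypothetical)."""
    camera_is_virtual: bool      # e.g. the video device matches a known virtual-camera driver
    device_is_emulated: bool     # emulator or rooting indicators reported by a device SDK
    stream_tamper_score: float   # 0..1 likelihood the feed was substituted or re-encoded
    visual_fake_score: float     # 0..1 output of a frame-level deepfake classifier

def verify_session(s: SessionSignals,
                   tamper_threshold: float = 0.5,
                   visual_threshold: float = 0.5) -> bool:
    """Accept only if BOTH the capture path and the content look legitimate.

    A pristine-looking video is still rejected when the pipeline that
    delivered it is untrustworthy: that is what closes the gap between
    spotting a fake face and stopping an injected fake feed.
    """
    capture_path_ok = (not s.camera_is_virtual
                       and not s.device_is_emulated
                       and s.stream_tamper_score < tamper_threshold)
    content_ok = s.visual_fake_score < visual_threshold
    return capture_path_ok and content_ok

# An injected feed can carry a near-perfect visual score and still be rejected:
injected = SessionSignals(camera_is_virtual=True, device_is_emulated=False,
                          stream_tamper_score=0.1, visual_fake_score=0.02)
print(verify_session(injected))  # False
```

The key design point is the AND: a flawless visual score cannot rescue a session whose delivery path fails integrity checks, and vice versa.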
The risks are significant:
- Account takeover of existing profiles, giving attackers unauthorized access to sensitive data.
- Synthetic identities created to open fraudulent accounts, enabling money laundering or financial fraud.
- HR verification bypass, allowing fake applicants to infiltrate workflows and potentially access sensitive information or roles.
Even trained experts are starting to struggle with the latest deepfakes. As generative models improve, the tells grow subtler, and manual review becomes a grind: fatigue sets in after hundreds of decisions, and “high confidence” can turn into guesswork. And as injection techniques show, a video that looks completely legitimate may not reflect what a real camera captured in the first place. Consensus among multiple reviewers is no longer a guarantee of truth either. These limitations of human detection underscore the growing need for automated, end-to-end defenses that can keep pace with the latest deepfake attacks.
Introducing Incode Deepsight
Incode Deepsight is an AI-driven system designed to detect and stop deepfakes, injection attacks, and synthetic identity fraud – all without disrupting legitimate users. Unlike single-signal solutions that rely solely on visual anomalies, Deepsight evaluates multiple layers of activity across each session, preventing attacks before they reach verification flows.
- Perception Layer: Multi-modal AI analyzes motion, depth, and video frames to detect subtle inconsistencies while identifying fingerprints left by generative AI tools.
- Behavioral Layer: Monitors user activity for suspicious patterns, including bot-like behavior, rapid repetition, or unusual interactions.
- Integrity Layer: Verifies device and video feed integrity, detecting rooting, emulation, tampering, virtual cameras, or manipulated streams.
Together, these layers cross-validate activity to create a holistic, end-to-end defense against increasingly sophisticated attacks.
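As a rough sketch of how such cross-validation might be wired together (the layer names follow the list above, but the scoring and policy logic here are assumptions for illustration, not Incode’s design):

```python
def fuse_layers(perception: float, behavioral: float, integrity: float) -> str:
    """Combine per-layer risk scores (0 = clean, 1 = certain attack)
    into a single session decision.

    Cross-validation policy: one elevated layer triggers step-up review,
    while agreement across layers blocks the session outright.
    """
    scores = {"perception": perception,
              "behavioral": behavioral,
              "integrity": integrity}
    flagged = [name for name, score in scores.items() if score >= 0.7]
    if len(flagged) >= 2:
        return "block"    # independent layers agree: high-confidence attack
    if len(flagged) == 1:
        return "step_up"  # one suspicious signal: request additional proof
    return "allow"        # all layers clean: no friction for legitimate users

print(fuse_layers(perception=0.9, behavioral=0.2, integrity=0.8))  # block
```

Because each layer observes a different surface (content, behavior, device), an attacker who defeats one layer still has to defeat the others, which is what makes the combined decision stronger than any single signal.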
Validated in Real-World Testing
Independent testing at Purdue University confirms Deepsight’s effectiveness under real-world conditions. Purdue’s Political Deepfakes Incident Database (PDID) benchmark replicates the low-resolution, compressed, and socially circulated content enterprises encounter daily on platforms such as X/Twitter, YouTube, TikTok, and Instagram.
In these evaluations, Deepsight:
- Achieved the lowest image false-acceptance rate (FAR) of the solutions tested, at 2.56%, meaning fake images were rarely accepted as genuine.
- Delivered the highest video accuracy among commercial solutions at 77.27%, with a video FAR of 10.53%.
- Performed exceptionally well even on political media, despite being designed for identity verification, demonstrating robustness across domains.
Internal testing across 1.4 million enterprise verification sessions reinforced these results: Deepsight achieved a 68× lower false-acceptance rate than the next-best commercial solution, and prevented tens of thousands of fraudulent sessions – demonstrating strong, low-friction protection against AI-driven fraud.
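For context, a false-acceptance rate is the fraction of attack samples a system wrongly accepts as genuine. The snippet below illustrates the arithmetic with made-up numbers, not figures from the Purdue study or Incode’s internal tests:

```python
def false_acceptance_rate(attacks_accepted: int, attacks_total: int) -> float:
    """FAR = share of attack (fake) samples the system wrongly accepts."""
    return attacks_accepted / attacks_total

# Hypothetical illustration: if 26 of 1,000 deepfake probes slip through,
# FAR is 2.60%, roughly the magnitude of the 2.56% image FAR cited above.
print(f"{false_acceptance_rate(26, 1_000):.2%}")  # 2.60%
```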
Why Multi-Layered Detection Matters
Modern deepfake attacks exploit entire systems, not just visual content, making single-layer detection inadequate. Deepsight’s multi-layered approach – assessing perception, behavioral, and device/video integrity – ensures even sophisticated, multi-vector attacks are caught before compromising identity verification. Each layer reinforces the others, enabling enterprises to scale security alongside business growth without sacrificing usability, conversion, or customer experience.
Staying Ahead of Deepfake Fraud
As deepfake and synthetic identity attacks rise, enterprises cannot afford to treat detection as optional. Multi-layer AI defenses like Incode Deepsight prevent fraud, reduce false positives, maintain smooth user experiences, and preserve trust in digital interactions. With independent validation and proven ability to catch attacks that human reviewers and single-signal systems miss, Deepsight equips enterprises to stay one step ahead of threat actors.
More about TechCrunch
TechCrunch is one of the world’s leading technology publications, known for its in-depth reporting on startups, AI breakthroughs, cybersecurity, enterprise innovation and emerging tech markets. Through analytical features, industry interviews and trend coverage, the outlet provides founders, investors and global leaders with authoritative insight into the forces shaping the future of technology and digital security.
Read the full article here and learn more about Incode’s Deepsight here.