Purdue University built a real-world deepfakes benchmark to stress-test academic, government, and commercial tools on a social-media content dataset.
In that head-to-head, Incode delivered the lowest false-acceptance rate (FAR) on images (2.56%) and the best video accuracy among commercial tools (77.27%), with video FAR at 10.53%, a rare balance of catch rate and precision that minimizes false positives and operational friction.
Purdue also finds that paid/commercial detectors generally outperform free-access models, underscoring the importance of continuously maintained systems.
Combine those results with Deepsight’s behavioral and integrity layers, and you get the world’s most accurate deepfake detection. While the Purdue benchmark focuses on generalized deepfakes, in identity verification specifically the performance gap is even wider.
In internal testing across millions of real IDV sessions, Deepsight has proven a 68x lower false-acceptance rate than the next-best commercial solution and is 10x better at identifying deepfakes than expert human reviewers.
What Purdue University Tested and Why It Matters
Traditional deepfake datasets are built under lab conditions (clean, frontal faces, controlled lighting) and don’t reflect real-world scenarios, which include heavy compression, sub-720p resolution, post-processing, and heterogeneous generation pipelines.
Purdue’s Political Deepfakes Incident Database (PDID) explicitly targets real incidents from platforms like X/Twitter, YouTube, TikTok, and Instagram, bringing those artifacts into the evaluation.
Purdue curated 232 images and 173 videos and evaluated detectors end-to-end using a common methodology (ACC, AUC, FAR). That mix includes low-resolution content and short, social-media-style clips, which are notoriously challenging in production settings.
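For readers less familiar with these metrics, here is a minimal, illustrative sketch (not Purdue’s actual evaluation code) of how accuracy (ACC) and false-acceptance rate (FAR) are computed from binary detector outputs. The FAR convention used here, the fraction of fakes accepted as genuine, is one common definition; the paper’s exact convention may differ.

```python
# Illustrative sketch of the benchmark's headline metrics (not Purdue's code).
# Labels: 1 = deepfake, 0 = genuine content.

def accuracy(y_true, y_pred):
    """ACC: fraction of samples classified correctly."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def false_acceptance_rate(y_true, y_pred):
    """FAR: fraction of fakes (label 1) the detector accepted as genuine (predicted 0)."""
    fakes = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    return sum(p == 0 for _, p in fakes) / len(fakes)

y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
print(accuracy(y_true, y_pred))               # 0.75
print(false_acceptance_rate(y_true, y_pred))  # 0.25
```

A low FAR matters operationally because every falsely accepted deepfake is a fraud loss, while a well-calibrated threshold also keeps false rejections of genuine users low.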
The study compares white-box academic and government models, commercial black-box tools, and LVLMs, with the explicit goal of exposing the limits of detectors when moved from lab datasets to political content circulating “in the wild.”
Where Incode Deepsight Stood Out
- Lowest false positives on images
Incode achieved 91.07% image accuracy with the lowest image FAR, 2.56%, “reflecting a well-calibrated decision boundary that minimizes false positives,” a critical driver of low-friction operations.
- Best commercial video accuracy with strong precision
On video, Incode posted 77.27% accuracy and 10.53% FAR, the top accuracy among commercial tools evaluated in the benchmark.
- Competitive balance across modalities
Purdue’s Table 4 shows Incode with the second-best image accuracy in the commercial set and the leading video accuracy, signaling one of the strongest overall trade-offs between catching deepfakes and avoiding false rejections of genuine content.
- Commercial vs. free models
The paper’s findings highlight that commercial detectors generally outperform free-access counterparts, likely due to ongoing updates and broader data exposure, consistent with Incode’s real-world performance.
- Out-of-domain resilience
Purdue notes that Incode is a face-oriented detector designed for identity verification, not politics, yet it still leads on this political benchmark. That result speaks to robustness when the content distribution shifts.
From Detection to Defense: How Deepsight Combines Deepfake Protection with Injection Prevention
The attack surface for deepfakes is large. Attackers can inject manipulated content via virtual cameras, rooted/jailbroken devices, or emulators during the verification process. Deepsight was built to holistically secure the entire path from device to decision:
- Behavioral Layer
Flags behavioral risk signals indicating farm-like patterns and non-human or bot-like user behavior typical of automation.
- Integrity Layer
Ensures integrity at the time of capture, blocking tampered devices and spoofed or virtual cameras trying to bypass traditional IDV systems.
- Perception Layer
Detects deepfakes using a large multi-modal AI model (video, motion, and depth) that identifies cross-modal inconsistencies typical of Gen AI tools, achieving >99% digital deepfake detection.
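As an illustration of how the three layers fit together, the following sketch models them as sequential gates on a verification session. All names, signals, and thresholds are hypothetical; this is not Incode’s actual API or implementation.

```python
# Hypothetical sketch of a layered decision pipeline; names, risk signals, and
# thresholds are illustrative only, not Incode's actual implementation.
from dataclasses import dataclass, field

@dataclass
class Session:
    bot_score: float          # behavioral layer: automation/farm-like risk, 0..1
    virtual_camera: bool      # integrity layer: capture-source tampering detected
    deepfake_score: float     # perception layer: cross-modal inconsistency, 0..1
    flags: list = field(default_factory=list)

def evaluate(session: Session) -> str:
    # Behavioral layer: flag non-human, bot-like patterns first.
    if session.bot_score > 0.9:
        session.flags.append("behavioral:automation")
        return "reject"
    # Integrity layer: block injection paths (virtual cameras, tampered devices)
    # before any media is trusted.
    if session.virtual_camera:
        session.flags.append("integrity:virtual-camera")
        return "reject"
    # Perception layer: multi-modal deepfake detection on the captured media.
    if session.deepfake_score > 0.5:
        session.flags.append("perception:deepfake")
        return "reject"
    return "accept"

print(evaluate(Session(bot_score=0.1, virtual_camera=False, deepfake_score=0.2)))  # accept
print(evaluate(Session(bot_score=0.1, virtual_camera=True, deepfake_score=0.2)))   # reject
```

Ordering the gates this way means an injected stream is rejected upstream, so the perception layer only ever scores media captured from a trusted device.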
Deepsight’s holistic architecture also fingerprints generator “styles” (UMAP clustering), strengthening attribution signals even when a fake looks visually perfect.
As the color-coded clusters show, each synthetic image generation tool has a unique UMAP profile that Deepsight’s AI can recognize.
In other words, even if FaceStudio (the purple cluster at [5,5]) were suddenly to generate a completely convincing deepfake, its distinct fingerprint would still indicate a high likelihood that the image originated from that tool, and is therefore very probably a deepfake, regardless of how perfect it looks on the surface.
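To make the fingerprinting idea concrete, here is a toy sketch of attributing a new sample to the nearest known generator cluster in an already-computed 2D embedding (e.g., from UMAP). The centroid coordinates and generator names other than FaceStudio are invented for illustration; this is not Deepsight’s attribution model.

```python
# Toy illustration of generator-fingerprint attribution in an embedding space.
# Assumes images were already projected to 2D (e.g., via UMAP). The centroids,
# including the [5, 5] "FaceStudio" position, are invented examples.
import numpy as np

centroids = {
    "FaceStudio": np.array([5.0, 5.0]),   # the purple cluster at [5, 5]
    "GeneratorB": np.array([-3.0, 1.0]),  # hypothetical second tool
    "GeneratorC": np.array([0.0, -4.0]),  # hypothetical third tool
}

def attribute(embedding: np.ndarray) -> str:
    """Return the generator whose cluster centroid is nearest to the sample."""
    return min(centroids, key=lambda name: np.linalg.norm(embedding - centroids[name]))

print(attribute(np.array([4.7, 5.2])))  # FaceStudio
```

The key point: attribution keys on where an image lands in fingerprint space, not on how realistic it looks, which is why a visually perfect fake can still be flagged.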
Why the Pairing of Deepfake Defense with Injection Prevention Matters
Purdue validates Incode’s precision on real political media. Deepsight wraps that precision in behavioral and integrity layer defenses that block injection before deepfakes enter the verification flow, adding layers of protection by flagging deepfake-relevant risk signals upstream.
This holistic approach to protecting against deepfakes translates into market-leading real-world performance, and it aligns with Incode’s own findings: measured across 1.4M real-world sessions in H2 2025, Deepsight caught 24,360 additional fraudulent sessions that no other system or human review would have identified, a significant reduction in fraud.
What This Means for Buyers and Operators
1) Lower friction from fewer false alarms
FAR governs escalations, false flags, and abandonment. Incode’s industry-low image FAR (2.56%) and measured video FAR (10.53%) translate to fewer manual reviews and smoother user journeys at scale.
2) Coverage where attacks are moving
PDID captures short, low-res, heavily processed, social-media content, the same conditions that increasingly show up in fraud pipelines. Performing well there is a proxy for real-world readiness.
3) End-to-end mitigation, not just detection
Deepsight stops virtual-camera streams, injected content, and tampered devices upstream, while multi-frame liveness and MMI catch deepfake artifacts downstream. One system, one decision.
4) Independent context for decision-makers
Purdue’s study was designed to “test the corners of the box”, exposing where white-box and black-box detectors struggle in the wild, and it documents the need for careful thresholding and calibration in practice. Your teams get transparent third-party evidence for vendor selection.
5) Built for production complexity
Purdue also shows that commercial detectors generally outperform free-access models and that political video detection remains harder than image detection, exactly the realities Deepsight was engineered to handle.
The Bottom Line
- Independent validation. Purdue’s real-world political deepfakes benchmark shows Incode with the lowest image FAR (2.56%), best commercial video accuracy (77.27%), and 10.53% video FAR, while emphasizing why detectors must generalize beyond lab data.
- Product leadership. Deepsight extends strong deepfake detection into holistic fraud prevention by combining Behavioral, Integrity, and Perception Layer defenses to deliver >99% digital deepfake detection performance with no extra user steps.
In identity-verification-focused real-world sessions, Deepsight delivers a 68x lower false-acceptance rate than the next-best commercial solution.
If you’re evaluating deepfake and injection defenses for KYC, payments, media content, or platform integrity, Deepsight pairs measured precision with full-stack prevention, designed for executive outcomes: lower fraud losses, lower friction, and higher conversion.
If you want to read the complete Purdue University study, please click here.
Incode was named a Leader in the 2025 Gartner® Magic Quadrant™ for Identity Verification. Download the report.