Introducing Deepsight
Protect your business from deepfakes

Incode unveils Deepsight AI defence to combat deepfakes

In this article, Security Brief UK reports on Incode’s launch of Deepsight, an AI-powered defence designed to detect and stop deepfake-driven identity fraud. The piece explores how generative AI is accelerating impersonation attacks and why traditional identity checks are no longer enough to protect digital trust.

The article includes insights from Ricardo Amper, CEO and Founder at Incode, who explains how deepfakes are reshaping the fraud landscape and why a multi-layered approach is critical to defending against AI-powered threats.

Read the transcription of this article below, published by Security Brief UK on January 6, 2026.

Incode unveils Deepsight AI defence to combat deepfakes

By Kaleah Salmon, News Editor

Published January 6, 2026

Incode has launched an artificial intelligence-based defence system called Deepsight that detects and blocks deepfakes, virtual cameras and synthetic identity attacks for large organisations.

The company said Deepsight analyses multiple types of data during live identity checks and operates in under 100 milliseconds. It targets fraud in processes such as onboarding, authentication and workforce access.

Incode positions the product as part of a wider investment in AI for identity and trust. The company has started work on Agentic Identity, which links verified people with AI agents that act on their behalf.

“Deepfakes have evolved beyond novelty. They are now a major fraud weapon,” said Ricardo Amper, Founder and CEO, Incode. “When identity can be faked, everything breaks. Deepsight restores trust by ensuring every capture shows a human user in front of the camera, not a deepfake.”

Three-layer model

Deepsight uses a three-layer structure that examines behaviour, device integrity and visual signals. Each layer focuses on a different aspect of an identity check.

The behavioural layer looks for anomalies that suggest automated fraud. Incode said it can identify subtle interaction patterns that point to AI bots or coordinated fraud farms.

The integrity layer checks the authenticity of the camera and device. It blocks content from virtual cameras and other injected media sources.

The perception layer distinguishes deepfakes from genuine human users. It runs AI analysis across video, motion and depth data that a device captures during a session.

Incode said the combination of the three layers can identify specific characteristics of the generative models that produced fake content. This gives organisations more detail on the source and nature of attacks.

“Being able to tell if someone is real or not is becoming one of the defining challenges of our time,” said Roman Karachinsky, Chief Product Officer at Incode. “Deepsight has proven its effectiveness in both the lab and the real world.”

Academic benchmarking

Purdue University has evaluated Deepsight in a study of deepfake detection systems. The research, titled “Fit for Purpose? Deepfake Detection in the Real World”, compared 24 tools from commercial, government and academic providers.

Incode said it achieved the highest accuracy and the lowest false acceptance rate among commercial systems in the study. It also said its detector outperformed models from government and academic groups.

“We evaluated nine of the most widely used commercial deepfake detection systems and found that Incode’s detector achieved the highest accuracy in identifying fake samples. This outcome suggests that Incode demonstrates stronger robustness and reliability in challenging real-world scenarios,” said Shu Hu, assistant professor at the School of Applied and Creative Computing and the Director of the Purdue Machine Learning and Media Forensics (M2) Lab at Purdue University.

The company reported that Deepsight was ten times more accurate than trained human reviewers in its internal tests. It said this shows that advanced AI defence is now essential against AI-generated attacks.

Enterprise adoption

Enterprises in sectors such as social media, banking, and fintech have started using Deepsight. Incode said the system is already in deployment at TikTok, Scotiabank and Nubank.

The firm said Deepsight has protected millions of users across more than six million live identity sessions so far. It is available as part of the Incode Identity Platform, which processes identity checks across multiple industries.

Voi, a European micromobility provider, uses Incode technology in its fraud and identity systems. The company applies deepfake detection in its age and safety checks.

“With the tools available today, creating deepfakes is easily done by minors,” said Chris Hobbs, Senior Category Manager, Indirect Procurement at Voi. “Incode helps us prevent fraud and ensure the legal age and safety of our customers.”

Incode said it now works with major banks, telecom operators, fintechs, marketplaces and governments. It processes billions of identity checks each year as AI-generated fraud attempts increase.

“AI will change how we live, work, and connect,” added Amper. “Our responsibility is to make sure it does not destroy the trust that holds it all together. Deepsight is how we defend reality itself.”

More about Security Brief UK


Security Brief UK is a UK-based cybersecurity and technology news publication covering cyber risk, fraud, data protection, and enterprise security. The outlet reports on emerging threats, regulatory developments, and innovations shaping how organizations protect digital identities, systems, and users.

Read the full article here and learn more about Deepsight at this link.

