
Deepfakes are no longer limited to content manipulation. They are increasingly reshaping how identity fraud is executed across onboarding, authentication, and account recovery flows.
The scale of the problem is already measurable. According to figures referenced during the discussion, the World Economic Forum recorded more than $200 million in deepfake-related fraud losses in the first quarter of 2025 alone, and Deloitte estimates that AI-driven fraud losses in the United States could approach $40 billion. At the same time, attackers can now generate hundreds or even tens of thousands of synthetic identities in real time, often with minimal expertise.
Taken together, these signals point to a shift in both the economics and the mechanics of fraud: lower barriers to entry, higher scalability, and faster iteration cycles for attackers.
In this live discussion, our panel examines what current detection data reveals about attacker behavior, reuse patterns, and the structural shift happening as AI-generated identities become more repeatable and economically viable.
The most important insight from our discussion, "Deepfakes in 2026: Why Advanced AI Is Becoming an Existential Risk in Fraud," centered on how identity fraud is evolving at scale. As advanced AI tools become more accessible, identity fraud becomes more scalable, more systematic, and harder to evaluate using static signals alone.
Our panel explores what this shift means in practice and how leaders should reassess identity risk, system design, and long-term strategy in response.