In this article, VMblog.com features Incode’s 2026 predictions on deepfakes, synthetic identities, and the future of digital trust as part of its annual industry forecast series. The piece examines how generative AI, autonomous agents, and malleable digital identities are reshaping trust, accountability, and risk across the internet.
The article is authored by Ricardo Amper, Founder and CEO of Incode, who outlines why identity must evolve from a one-time checkpoint into a living signal that enables speed without sacrificing accountability.
Read the article below.
Incode 2026 Predictions: Deepfakes, Synthetic Identities, and the Future of Digital Trust
By Ricardo Amper, Founder and CEO at Incode
Published January 13, 2026
As 2025 closes, we are entering a new phase of the internet – one where the default assumption shifts from “this is probably real” to “this could be generated.” Not because people suddenly became more deceptive, but because the tools to manufacture convincing content have become ordinary. What used to require studios, specialist talent, or serious budgets now fits inside everyday workflows. And that changes the emotional physics of trust.
Identity Moves From a Gate to an Accountability Layer
For a long time, digital identity was treated like a gate you pass through. You sign up, you verify, you move on. In 2026, identity stops being a one-time checkpoint and becomes the accountability layer behind almost every meaningful interaction. The question is no longer “can this person get in?” It becomes “what is this entity allowed to do – and who is responsible for it when things go wrong?”
This is where deepfakes and synthetic personas stop being a novelty. In the best case, synthetic personas become a new form of creativity, self-expression, and privacy. People will experiment with identity the way they once experimented with usernames and avatars, except with far higher realism and far higher stakes. In the worst case, the same realism makes deception cheap, scalable, and repeatable. When a face, a voice, and a story can be generated on demand, it becomes easier to impersonate trust than to earn it.
The most important shift may not be what we see on social media, but what happens in the background. Fraud does not need to be loud anymore. It can be patient. It can be engineered. It can blend real signals with synthetic ones until the difference becomes hard to spot with the kinds of checks most organizations still rely on. The result is a world where “looking legitimate” becomes a commodity.
The Next Wave: AI Agents
In 2026, more people will delegate real actions to software that acts on their behalf. Not just drafting emails or summarizing documents, but moving money, shopping, booking travel, negotiating subscriptions, opening accounts, requesting access to data, connecting to services, and executing workflows inside companies. These agents will be useful precisely because they remove friction. But that convenience creates a new trust problem: if an agent can act, it can also be misdirected, hijacked, or impersonated.
We will need a simple way to answer a new set of questions:
- Who owns this agent?
- Who authorized it?
- What can it do?
- What can it never do?
- When should it be required to prove who it is acting for?
This is not just a security concern. It is a product concern. A policy concern. A societal concern. If we get it wrong, we will live in a world of constant doubt, where every interaction feels provisional. If we get it right, we can preserve speed without sacrificing accountability.
Designing Trust for a Malleable Reality
The solution cannot be constant interruptions. Nobody will tolerate stopping for minutes at a time to prove they are real, dozens of times a day. Trust has to be designed to be seamless – mostly invisible when risk is low, and unmistakably strong when the consequences are high. In practice, that means tying digital actions back to a real human, quickly and reliably, and escalating friction only when the situation demands it. Identity becomes a living signal, not a static certificate.
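The "mostly invisible, escalating only when it matters" idea can be sketched as a risk-tiered policy. This is a minimal illustration, assuming a normalized risk score and a consequence label; the thresholds and tier names are invented for the example, not a real verification API:

```python
def required_check(risk_score: float, consequence: str) -> str:
    """Map risk and consequence to a friction level: invisible when risk is low,
    unmistakably strong when the consequences are high. Thresholds are illustrative."""
    if consequence == "high" or risk_score >= 0.8:
        return "active_identity_proof"   # e.g., live verification of the human
    if risk_score >= 0.4:
        return "passive_signals"         # device, behavior, session history
    return "none"                        # frictionless path for everyday actions
```

Most interactions fall through to the frictionless path, which is what keeps identity a living signal rather than a repeated interruption.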
This is the tipping point: reality is becoming more malleable, and the incentives for abuse are obvious. The organizations that treat trust as a core capability – not a compliance checkbox – will be the ones that can move fast in the new year without losing control. The ones that don’t will spend the year reacting to problems that feel new, but are really just the predictable outcome of an internet where “proof” got easier to fake than ever before.
More about VMblog.com
VMblog.com is a long-running enterprise technology publication covering virtualization, cloud computing, cybersecurity, digital infrastructure, and emerging IT trends. Through expert commentary and an annual prediction series, the outlet highlights how innovation is reshaping enterprise technology, security, and digital trust.