Beyond IAM: 3 Identity Risks Most IAM Platforms Miss
Your IAM platform manages thousands of identities and access requests daily. But what happens when the biggest threat isn’t a system breach, but a perfectly legitimate-looking employee who doesn’t actually exist?
IAM tools like Okta, Microsoft Entra ID, and Ping Identity are essential, but attackers have evolved. Deepfakes, stolen identities, and synthetic personas now exploit gaps that IAM was never designed to cover.
Let’s explore three critical blind spots and how identity-centric security can help fill them.
1. Synthetic Employees in Your Directory
In early 2024, a finance employee at the engineering firm Arup in Hong Kong transferred $25 million after a video conference call with what appeared to be the company’s CFO. The attackers used deepfake technology to mimic the CFO’s face and voice convincingly enough to fool her (Source: CFO Dive).
And in a 2024 U.S. Department of Justice case, North Korean operatives used deepfakes and stolen identities to land remote jobs at over 300 U.S. companies. AI-generated videos and basic ID checks were enough to get through onboarding (Source: CBS News).
IAM manages access after identity is established. But if a fake identity gets into your directory, IAM will grant them access like any other employee. Most checks validate credentials, not the actual person. An attacker can steal someone’s identity, claim their work history, and pass screening, but that does not make them the real individual.
The fix: Use identity verification during onboarding that includes a biometric check paired with a government-issued ID to ensure every new hire is a live, present human tied to a real document, not a synthetic imposter or deepfake.
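The decision logic behind that kind of onboarding check can be sketched in a few lines. This is a minimal illustration, not any vendor’s API: the signal names, score scale, and 0.85 threshold are all assumptions chosen for the example.

```python
from dataclasses import dataclass

@dataclass
class OnboardingCheck:
    """Signals from a hypothetical identity-verification step.
    Field names and thresholds are illustrative, not a real SDK's output."""
    doc_authentic: bool      # government-issued ID passed forensic checks
    face_match_score: float  # selfie vs. ID photo similarity, 0.0 to 1.0
    liveness_passed: bool    # the selfie came from a live, present person

def approve_new_hire(check: OnboardingCheck, match_threshold: float = 0.85) -> bool:
    """Approve onboarding only when all three signals pass.
    A stolen ID fails the face match; a deepfake selfie fails liveness."""
    return (
        check.doc_authentic
        and check.liveness_passed
        and check.face_match_score >= match_threshold
    )

# A synthetic applicant using a stolen ID: the document looks real,
# but the applicant's face does not match the document photo.
imposter = OnboardingCheck(doc_authentic=True, face_match_score=0.41, liveness_passed=True)
print(approve_new_hire(imposter))  # False
```

The point of requiring all three signals together is that each one closes a different gap: document checks alone pass stolen IDs, and face matching alone passes a deepfake rendered from the victim’s photo.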
2. Deepfake Impersonation at Critical Moments
Help desks often verify users via voice, chat, or video. All of these channels are now vulnerable to deepfakes. A VMware survey found 66 percent of security leaders have already encountered deepfake-based attacks (Source: Dark Reading).
Attackers can now spoof live video calls, replicate facial movements, or use pre-recorded footage to bypass human verification with help desk personnel. This is especially dangerous during high-trust actions like password resets, device reactivations, or off-boarding approvals.
The fix: Add liveness detection and facial biometrics at critical moments. Multilayered liveness checks with 3D depth perception analyze whether the person is physically present. The best solutions also analyze the device itself and detect signals of pre-recorded or injected content, a common deepfake tactic.
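Gating high-trust help desk actions behind those checks can be expressed as a simple policy function. A minimal sketch, assuming hypothetical signal names (`depth_liveness_passed`, `injected_stream_detected`, `face_match_passed`) rather than any real detection SDK:

```python
# Help desk actions that should never proceed on voice or chat alone.
HIGH_TRUST_ACTIONS = {"password_reset", "device_reactivation", "offboarding_approval"}

def helpdesk_action_allowed(action: str, signals: dict) -> bool:
    """Allow a high-trust action only if the caller passes a live biometric check.
    Signal keys are illustrative placeholders, not a real vendor's output."""
    if action not in HIGH_TRUST_ACTIONS:
        return True  # low-risk actions keep the existing workflow
    # Depth analysis failed: the "face" may be a flat screen replay.
    if not signals.get("depth_liveness_passed", False):
        return False
    # The video was injected into the call rather than captured live by a camera,
    # a common way to feed in pre-recorded or generated footage.
    if signals.get("injected_stream_detected", True):
        return False
    return signals.get("face_match_passed", False)

# A caller replaying pre-recorded footage passes the face match
# but fails the stream-injection check.
print(helpdesk_action_allowed(
    "password_reset",
    {"depth_liveness_passed": True,
     "injected_stream_detected": True,
     "face_match_passed": True},
))  # False
```

Note the defaults: when a signal is missing, the function fails closed, which is the safer posture for actions like password resets.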
3. Credential Abuse That Your IAM Can’t See
According to Microsoft, credential-based attacks rose 74 percent year over year, with 921 password attacks every second (Source: SecurityBrief). Attackers often gain valid credentials through phishing, social engineering, or SIM swaps, in which an attacker convinces a mobile carrier to transfer a victim’s phone number to a SIM they control, giving them access to text message-based two-factor codes and account recovery links. Others hijack trusted sessions or trick IT support into resetting credentials and MFA.
IAM systems log these events as normal activity. They show successful logins from trusted devices, even when the real user is nowhere near the account. Without verifying the person behind the action, shared or stolen credentials are indistinguishable from legitimate use.
The fix: Implement biometric step-ups for sensitive operations. A quick face scan with strong liveness technology can confirm that the person taking the action is actually the authorized user.
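A step-up policy like this usually reduces to deciding which operations trigger the extra check. A minimal sketch, where the operation names and the $10,000 payment threshold are illustrative policy choices, not recommendations:

```python
# Operations that always require a biometric step-up, regardless of context.
SENSITIVE_OPS = {"wire_transfer_approval", "mfa_reset", "role_elevation"}

def requires_biometric_stepup(operation: str, amount: float = 0.0,
                              stepup_amount: float = 10_000.0) -> bool:
    """Return True when an operation should trigger a face-scan step-up.
    Operation names and the amount threshold are hypothetical examples."""
    if operation in SENSITIVE_OPS:
        return True
    # Payments above a configurable threshold also step up.
    return operation == "payment" and amount >= stepup_amount

print(requires_biometric_stepup("mfa_reset"))          # True
print(requires_biometric_stepup("payment", 25_000))    # True
print(requires_biometric_stepup("payment", 500))       # False
```

The value of putting this in policy code rather than ad hoc help desk judgment is that the trigger conditions become auditable: you can review exactly which operations verify the person, not just the session.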
Next Steps for IAM Leaders
IAM tools manage credentials, policies, and access, but not the human behind them. A biometric layer adds real-time identity verification and closes gaps that deepfakes and impostors now exploit. These steps complement device-based authentication to ensure you are verifying the person, not just the device.
Immediate Actions:
- Audit your onboarding verification process. Are you really verifying the identity of the individual tied to the account?
- Test your help desk identity verification processes. Are they resistant to deepfake and impersonation attempts?
- Pilot biometric step-ups for high-risk operations. Add a facial recognition check to confirm that the right person is taking action, not just someone in control of a trusted device.
Want to see how liveness detection works in real time? Book a quick walkthrough with our identity team.
Author
Carrie Melanda is a Product Marketing Manager at Incode. She brings deep expertise in product marketing and go-to-market strategy, with a proven track record of driving revenue growth and market differentiation in the cybersecurity space.
Additional Resources:
- North Korea duped U.S. companies in tech-worker scheme to fund weapons program, Justice Department says. CBS News.
- Scammers siphon $25M from engineering firm Arup via AI deepfake ‘CFO’. CFO Dive.
- Reshaping the Threat Landscape: Deepfake Cyberattacks Are Here. Dark Reading.
- Microsoft report finds 74% increase in password attacks. SecurityBrief.