Anatomy of a Modern Candidate Fraud Attempt
In our previous blog, The New Threat Surface: How Hiring Became a Vector for Attack, we looked at how the hiring process has rapidly become a prime target for cybercriminals and nation-state actors.
Today, organizations are contending with an influx of fake job candidates armed with AI-generated resumes, fabricated online profiles, and facial deepfakes. Recruitment has become a significant and evolving security vulnerability, and candidate verification an essential component of modern business infrastructure.
Understanding how these schemes operate is essential to stopping them. Whether they come from independent scammers or state-sponsored groups, today’s fraudulent hiring attempts follow a predictable pattern.
1. Building a Fake Identity
Fraudsters often begin with either a stolen or a synthetic identity. Some gather personal details from breached databases to construct a “new” person, combining real names, Social Security numbers, and addresses with fake contact details. Others impersonate actual individuals using stolen credentials and forged identification. Professional profiles on platforms like LinkedIn are easily faked or purchased.
In one observed scheme, North Korean operatives created multiple false identities, complete with fabricated work histories, AI-generated profile photos, and detailed backstories. These personas maintained online footprints that appeared credible to recruiters.
2. Application and Screening
Once equipped with a believable persona, the fraudster applies to numerous job openings, especially for remote positions in tech or finance. These roles often offer high pay and access to sensitive data. Fake credentials, such as degrees or employment history, are common. Some applicants even list fake companies, using accomplices to pose as references.
Generative AI makes it easy to produce polished application materials. Nearly 40 percent of real candidates now admit to using AI for parts of the hiring process. Malicious actors go further, fabricating addresses to qualify for higher remote pay, producing counterfeit documents, and digitally altering their appearance in interviews.
During screening steps such as one-way video interviews, a fraudster may submit pre-recorded answers or an AI avatar. These videos are designed to look convincing, front-facing, and well lit, but the person on screen may not exist at all.
3. The Live Interview: Impersonation via Deepfake
If the fake candidate passes HR screening, they move on to live interviews, where deception becomes even more sophisticated. Deepfake video and audio technology may be used in real time to impersonate a stolen or fabricated identity.
Reports submitted to the FBI describe calls in which the interviewee’s voice was digitally altered and the video feed was a manipulated image with audio out of sync. A lack of blinking, robotic expressions, and other unnatural behavior are all telltale signs. One cybersecurity manager became suspicious when a candidate’s eyes never moved and their answers seemed oddly mechanical.
In some cases, multiple people work together to deceive interviewers. One person may appear on camera while another answers questions off-screen or completes coding tests. These tactics are aimed at gaining the interviewer’s trust and securing a job offer. North Korean operatives have used AI-generated personas during interviews to mimic stolen identities and successfully deceive hiring teams.
4. Credential Checks and Onboarding
Once the fraudster receives an offer, they enter the background check and onboarding phase. Traditional defenses often fall short here. Most background checks depend on the data provided by the candidate, which may be stolen or fabricated.
A fraudulent identity with no criminal record raises no red flags. In some cases, the check may mistakenly validate the real person’s clean record. Even government-required employment verification, such as the I-9 process, can be bypassed with high-quality fake documents or proxy actors. The FBI has reported cases where identity theft was only discovered after onboarding, when investigators found that the provided Social Security number belonged to someone else.
In the North Korean scheme, American accomplices posed as workers during ID verification or received company hardware, adding yet another layer of deception.
5. Day One and Beyond: Persistence
With the fraudster now considered an official employee, the risk escalates. For remote roles, companies often send laptops or grant VPN credentials. Sophisticated operations are ready for this. North Korean fraud rings maintained U.S.-based “laptop farms” where dozens of company-issued computers were operated by co-conspirators who relayed access overseas.
Some fraudsters use residential VPNs to spoof location or simply log in from abroad if there are no geo-restrictions. Once inside, the fake employee may minimize video contact to avoid detection. They may hold multiple jobs simultaneously under different identities or escalate access privileges over time. They may do just enough to stay employed, or in more malicious scenarios, begin extracting sensitive information.
6. The Endgame: Fraud, Theft, or Exposure
Motivations vary. Some impostors are financially motivated, funneling salaries to sanctioned regimes or earning wages under false identities. Others are focused on espionage or corporate sabotage.
The FBI warns that some impostor employees install malware or backdoors, preparing for future cyberattacks or extortion. In one case, North Korean IT contractors attempted to retain long-term access to networks for future data theft or ransom demands.
Eventually, many of these frauds collapse. A vigilant manager notices discrepancies, or audits reveal inconsistencies. When confronted, fake hires typically disappear. However, by that time, the damage is usually done, and it can be extensive.
This is an excerpt from our e-book, “Securing the Hiring Process Against Deepfakes and Identity Fraud,” by Fernanda Sottil, Head of Workforce at Incode. Download your complimentary copy to explore:
- The new threat surface: how hiring became a vector for attack
- Why traditional hiring processes are susceptible to fraud
- The risks organizations face in today’s talent landscape
- Best practices for building a resilient hiring pipeline
- Key criteria for evaluating a candidate verification solution
- Why leading enterprises trust Incode for identity assurance