The New Threat Surface: How Hiring Became a Vector for Attack
The hiring process has rapidly emerged as a critical attack surface for cybercriminals and nation-state actors. Organizations are now facing fake job candidates at scale, complete with AI-generated resumes, online profiles, and even synthetic faces or voices.
According to Gartner, by 2028, one in four job candidates will be fake, driven by generative AI and fraudulent identities. What was once a rare anomaly has become a systemic risk fueled by easy access to breached personal data and ever-more convincing deepfakes.
Until recently, most companies focused their defensive efforts and resources on external cyberattacks. Now, however, the greatest threat can come from a hacker who gains insider access simply by applying for a job and slipping in through the front door. Sophisticated actors are exploiting the inherent trust in the hiring pipeline, impersonating legitimate applicants and becoming insider threats.
“By 2028, one in four job candidates will be fake.” – Gartner
The rise in remote work and digital recruiting opened vast opportunities for businesses, while also introducing significant vulnerabilities and widening the attack surface. Today, candidates are rarely met in person and geographic hiring constraints have weakened.
Add the rise of generative AI, and the conditions for abuse become ideal: deepfake technology allows a person’s image, voice, or video to be faked in real time, at low cost.
At the same time, a global talent shortage—especially in IT and security—pressures hiring managers to fill roles quickly. That urgency often leads to overlooking warning signs in a candidate’s background. As a result, hiring has become a recognized vulnerability in the security chain.
Even well-resourced tech firms have been blindsided. A recent report revealed Fortune 500 companies unknowingly hired North Korean operatives as remote IT workers, who then funneled their salaries to Pyongyang’s weapons program.
The incentive for attackers is clear. Securing a job under false pretenses can yield direct financial gain through high salaries, or privileged access that enables espionage and fraud.
HR teams and CISOs are encountering strange situations, such as interviews where the video feed seems glitchy, eyes don’t move naturally, or the voice is slightly out of sync with lip movements. These signs suggest an unsettling possibility: the candidate may not be real.
For CHROs, this means HR’s responsibilities now include protecting against identity fraud. For CISOs, it means the security perimeter must start at the point of application. Both roles must work together to confront this evolving threat before a fake hire leads to a real breach or compliance failure.
Real-World Incidents: From Deepfake Interviews to Nation-State Infiltrators
These threats aren’t theoretical—they’re unfolding in headlines and government briefings. Perhaps the most alarming example is the North Korean remote worker infiltration scheme that has come to light in the past two years. According to threat intelligence experts, this scheme is “happening on a scale we haven’t seen before.”
The tactics used by the DPRK operatives were a masterclass in candidate fraud. They would steal or buy real Americans’ personal data (like addresses and SSNs) to fabricate credible applicant profiles. One cybersecurity firm found 1,000+ job applications linked to the North Korean program, often for developer and engineering roles.
In one FBI, State, and Treasury Department advisory, officials noted each skilled IT worker could earn up to $300,000 a year for North Korea. A recently unsealed case showed the breadth of the scam: one American facilitator pleaded guilty after helping run a laptop farm that supported North Korean hires at over 300 different U.S. companies, generating $17 million in illicit earnings.
Beyond the North Korean example, we’ve also seen domestic fraud rings use fake candidate schemes for monetary gain. In one recent case, scammers created synthetic identities to apply for customer support roles at banks, aiming to gain access to client accounts.
Other incidents involve legitimate job seekers cheating, e.g., by paying someone to pose as them in a coding test or interview, which raises questions about who actually shows up for work. While such cases are often quietly resolved (the fake hire is fired when discovered), they illustrate how determined fraudsters can bypass traditional hiring checks.
The fallout from these incidents is significant. Companies that fell victim faced regulatory scrutiny (for example, violating U.S. sanctions by employing a sanctioned entity), financial losses, and serious embarrassment.
The U.S. government has explicitly urged companies to tighten hiring checks due to national security concerns—a joint advisory in 2022 from the FBI and other agencies warned that North Korean IT workers were actively exploiting U.S. hiring pipelines, sometimes even planting malware once inside networks.
In another high-profile alert, the New York State Department of Financial Services cautioned financial institutions about the risk of inadvertently hiring foreign state-sponsored hackers as contractors.
For HR leaders, these real-world cases underscore that candidate fraud isn’t a hypothetical risk—it’s already here. And for security leaders, they highlight that insiders with falsified identities can be just as dangerous as outside hackers, if not more so.
This is an excerpt from our e-book, “Securing the Hiring Process Against Deepfakes and Identity Fraud,” by Fernanda Sottil, Head of Workforce at Incode. Download your complimentary copy to explore:
- The anatomy of a modern candidate fraud attempt
- Why traditional hiring processes are susceptible to fraud
- The risks organizations face in today’s talent landscape
- Best practices for building a resilient hiring pipeline
- Key criteria for evaluating a candidate verification solution
- Why leading enterprises trust Incode for identity assurance
Author
Fernanda Sottil leads the strategic direction and growth of Incode Workforce, which offers secure, device-independent biometric authentication for enterprises, integrating seamlessly with existing IAM systems to address critical employee interactions.