CISO’s Top 3 Concerns Regarding Deepfakes in Workforce Security

Deepfakes have rapidly evolved into a significant threat to businesses and individuals, especially when it comes to protecting the integrity of workforce security. Originally seen as a tool for spreading misinformation in the public sphere, deepfakes are now being leveraged against enterprises, creating new vulnerabilities for CISOs to manage.

Gartner research highlights that while presentation attacks remain the most common attack vector in identity verification, injection attacks, including AI-generated deepfakes, surged by 200% in 2023. Adding to this urgency, Deloitte forecasts that generative AI could drive fraud losses in the United States to $40 billion by 2027, a dramatic increase from $12.3 billion in 2023.

We’ve researched and consolidated the top concerns CISOs face regarding deepfakes, along with actionable insights to help organizations effectively mitigate these evolving risks.

1. Deepfakes Exploiting MFA Recovery Workflows

MFA recovery workflows often rely on voice-based authentication or knowledge-based questions, which are increasingly vulnerable to exploitation by generative AI and deepfake technology. These traditional methods, while convenient, fail to account for the sophistication of synthetic media, making them prime targets for impersonation and fraud.

For CISOs, the challenge lies in securing these workflows against sophisticated synthetic media that bypass traditional authentication measures, potentially granting attackers access to sensitive accounts.

Key impact:

  • Voice-based authentication risks: Attackers use AI to replicate vocal patterns from public recordings or intercepted audio, bypassing voice-based identity checks and enabling unauthorized access.
  • Vulnerabilities in video meetings: Deepfakes can replicate both voice and appearance, enabling a convincing presence in video calls and exploiting knowledge-based recovery workflows. This is exacerbated by:
    • Public information exposure: Social media, web content and professional networks reveal answers to common security questions.
    • Inadequate platform security: To our knowledge, no video conferencing tools today fully evaluate the liveness of content, allowing pre-recorded or synthetic footage to pass as authentic.

Our recommendations:

  • Advanced liveness detection and injection protection: Implement technologies that detect signs of synthetic manipulation and block altered media to ensure only live data is processed.
  • Multi-modal authentication: Combine biometrics with contextual and behavioral signals to create a robust, layered defense against deepfake attacks (see the scoring sketch after this list).
  • Reducing reliance on static knowledge-based questions: Replace these with dynamic, contextual factors like location-based checks or usage patterns to enhance security.
  • Secure video conferencing platforms: Use solutions with anti-spoofing mechanisms and additional validation steps to protect recovery workflows conducted over video calls.
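
To make the multi-modal idea concrete, here is a minimal sketch in Python of how recovery signals could be combined so that no single spoofable factor decides the outcome. Every signal source here (liveness score, device familiarity, location history, behavioral similarity) is a hypothetical placeholder for whatever vendor or in-house service an organization actually uses, and the weights and thresholds are illustrative, not prescriptive.

    # Minimal sketch of multi-modal scoring for an MFA recovery request.
    # All signal sources are hypothetical placeholders for real services.
    from dataclasses import dataclass

    @dataclass
    class RecoverySignals:
        liveness_score: float      # 0.0-1.0 from a liveness/injection-detection service
        device_is_known: bool      # device fingerprint previously seen for this user
        geo_matches_history: bool  # request location consistent with usage patterns
        behavior_score: float      # 0.0-1.0 similarity to the user's interaction baseline

    def assess_recovery(s: RecoverySignals) -> str:
        """Combine independent signals so no single spoofed factor
        (e.g., a cloned voice) is enough to approve recovery."""
        # Hard gate: suspected synthetic or injected media blocks the flow outright.
        if s.liveness_score < 0.5:
            return "deny"
        score = (0.4 * s.liveness_score
                 + (0.2 if s.device_is_known else 0.0)
                 + (0.2 if s.geo_matches_history else 0.0)
                 + 0.2 * s.behavior_score)
        if score >= 0.8:
            return "approve"
        if score >= 0.5:
            return "step_up"  # route to a stronger factor or live agent review
        return "deny"

    # Example: strong liveness but unknown device and unusual location
    # triggers step-up rather than silent approval.
    print(assess_recovery(RecoverySignals(0.9, False, False, 0.7)))  # -> step_up

Note the design choice: a low liveness score blocks the flow outright, while a high score alone is never sufficient, so even a convincing deepfake presented from an unknown device in an unusual location is routed to a stronger factor or a live agent rather than approved.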

2. Threats to Corporate Internal Processes and Transactions

The fraudulent use of deepfakes goes beyond identity theft. With the ability to fabricate realistic video or audio, attackers can manipulate corporate communications, leading to potentially catastrophic consequences.

According to the FBI's Internet Crime Complaint Center (IC3) 2023 report, business email compromise scams, including those enhanced by deepfakes, resulted in $2.9 billion in reported losses, making them the second-costliest type of cybercrime, with average losses of $275,000 per complaint. Deepfakes further enable attackers to impersonate company executives and authorize fraudulent transactions, exacerbating the problem.

Deepfakes can be exploited to impersonate trusted individuals, enabling falsified agreements or unauthorized access to sensitive systems. For instance, in 2019, attackers cloned a CEO’s voice to authorize a fraudulent transfer of €220,000 at a UK-based energy company. More recently, fraudsters used deepfake video to impersonate a CFO during a video call, tricking a Hong Kong firm into transferring $25 million.

These incidents illustrate how vulnerable companies are to sophisticated social engineering attacks facilitated by deepfakes.

Key impact:

  • Financial loss: Manipulated communications can result in unauthorized transactions or contract approvals, causing significant financial damage.
  • Unauthorized access to sensitive systems and data: Impersonation through deepfakes lets attackers exploit trust to bypass security measures and infiltrate sensitive systems. Fake communications or instructions not only open the door to unauthorized access but also cause operational disruptions that distract teams from detecting and responding to the breach, amplifying the financial and security impact.

Our recommendations:

  • AI-based detection tools: Deploy solutions that analyze audio and video for signs of synthetic manipulation before the content is trusted.
  • Stricter validation procedures: Require out-of-band confirmation and independent approval for sensitive communications and high-value transactions (a sketch of such a gate follows this list).
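
To illustrate what stricter validation can look like in practice, below is a hedged sketch of an approval gate for payment requests. The threshold, function names, and callback logic are assumptions made for the example rather than a prescribed implementation; the point is that requests arriving over spoofable channels are never trusted on their own.

    # Illustrative gate for payment requests received over email, voice,
    # or video. out_of_band_confirm() and get_second_approver() are
    # hypothetical stand-ins for a company's callback and approval tooling.

    HIGH_VALUE_THRESHOLD = 10_000  # example policy threshold, in dollars

    def out_of_band_confirm(requester_id: str) -> bool:
        # Placeholder: call the requester back on a number from the
        # corporate directory, never one supplied in the request itself.
        print(f"Calling back {requester_id} on their directory number...")
        return True  # demo value only

    def get_second_approver(amount: float) -> bool:
        # Placeholder: require sign-off from an independent, pre-designated
        # approver for anything above the policy threshold.
        print(f"Requesting independent approval for ${amount:,.2f}...")
        return True  # demo value only

    def approve_transfer(requester_id: str, amount: float, channel: str) -> bool:
        # Voice, video, and email are treated as unverified identity claims,
        # since all three can be convincingly spoofed or deepfaked.
        if channel in {"voice", "video", "email"}:
            if not out_of_band_confirm(requester_id):
                return False
        if amount >= HIGH_VALUE_THRESHOLD:
            return get_second_approver(amount)
        return True

    # Example: a video-call request for a large transfer requires both
    # a directory callback and a second approver before funds move.
    approve_transfer("cfo", 25_000_000, "video")

A gate like this addresses both incidents above: the cloned-voice request and the deepfake video call each arrive over a channel the policy treats as an unverified identity claim, forcing verification through a path the attacker does not control.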

3. Escalation and Security Gaps in Help Desk Operations

Deepfakes expose a critical weakness in help desk operations: the inability to reliably distinguish between legitimate requests and sophisticated impersonation attempts. As attackers use deepfakes to pose as executives or employees, help desk teams risk unknowingly granting unauthorized access to sensitive systems, enabling fraud or data breaches.

A new study reveals that 80% of companies lack protocols to handle deepfake attacks, while over 50% of business leaders admit their employees are not trained to recognize deepfake threats. This lack of preparedness highlights a significant gap in both training and technology, leaving organizations vulnerable to exploitation.

Furthermore, the 1,740% increase in deepfake fraud in North America from 2022 to 2023 underscores how quickly these attacks are scaling; the challenge now lies not only in their volume but in the growing complexity of detecting them.

Key impact:

  • Unauthorized access: Help desk teams, untrained in identifying deepfakes, can inadvertently grant attackers access to critical systems or sensitive information, leading to significant financial and reputational damage.
  • Security vulnerabilities: The absence of detection tools and clear protocols leaves organizations exposed to escalating threats from increasingly sophisticated deepfake attacks.

Our recommendations:

  • Advanced detection technology: Equip help desk teams with AI-driven tools capable of verifying identity and detecting deepfakes, minimizing the risk of unauthorized access. These tools can analyze audio, video, and behavioral patterns to differentiate between legitimate and synthetic identities.
  • Comprehensive training and protocols: Develop tailored playbooks for handling suspected deepfake incidents, combined with employee training to recognize and escalate potential threats effectively (a sample escalation gate follows this list).
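
As a simple illustration of the playbook idea, here is a sketch of a tier-1 escalation gate. The inputs are hypothetical results a help desk might receive from its identity verification and deepfake detection tooling; the rule names and action categories are assumptions for the example.

    # Sketch of a tier-1 help desk escalation gate. The inputs are
    # hypothetical outputs of identity-verification and deepfake-detection
    # tools; real playbooks would define these per organization.

    SENSITIVE_ACTIONS = {"password_reset", "mfa_reset", "access_grant"}

    def helpdesk_gate(action: str,
                      id_verified: bool,       # biometric/document check passed
                      deepfake_flagged: bool,  # detection tool raised suspicion
                      caller_pressure: bool    # urgency or secrecy, classic social-engineering cues
                      ) -> str:
        if deepfake_flagged:
            return "escalate_to_security"  # never resolve a flagged call at tier 1
        if action in SENSITIVE_ACTIONS and not id_verified:
            return "require_identity_verification"
        if caller_pressure:
            return "escalate_to_supervisor"  # pressure tactics warrant a second set of eyes
        return "proceed"

    # Example: an urgent, unverified MFA reset request never proceeds directly.
    print(helpdesk_gate("mfa_reset", id_verified=False,
                        deepfake_flagged=False, caller_pressure=True))
    # -> require_identity_verification

The ordering is the design choice: a deepfake flag always wins, identity verification is mandatory before anything sensitive, and pressure cues alone are enough to pull in a supervisor.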

Conclusion

As deepfake technology continues to improve, it is crucial for CISOs to proactively address these vulnerabilities by integrating advanced detection technologies, improving data integrity measures, and enhancing employee identity verification.

By staying ahead of these threats, organizations can better protect themselves from the significant financial, operational, and reputational impacts that deepfakes can cause.

Incode Workforce offers a robust solution to support companies in the fight against deepfakes and broader social engineering attacks, transforming employee lifecycle security with AI-driven biometric IAM enrollments, self-serve password resets, account recovery, and seamless help desk interactions.

Join the conversation and contact us to learn how Incode Workforce can safeguard your organization from these evolving threats, elevate your security and streamline IAM support operations.