Deepfake Hiring Fraud: Incode on the Security You Should Know Podcast

CISOs face deepfake candidates and AI hiring fraud. Learn how Incode adds real-world identity verification to protect workforce access and MFA recovery.

Identity systems were built for passwords and plastic IDs, not for a world where anyone can spin up a convincing face, voice, or entire digital persona in a few minutes.

Today, attackers use deepfakes, voice cloning, and synthetic identities to slip into hiring funnels, reset MFA at the help desk, and move laterally inside companies. Most IAM tools still focus on devices and credentials, without truly knowing who is behind the screen. That human layer has quietly become the new attack surface.

That is where Incode comes in.

In the latest episode of Security You Should Know, Fernanda Sottil, Senior Director of Strategy at Incode Technologies, explains how Incode adds a real-world identity layer that plugs directly into existing IAM stacks.

She is joined by Nick Espinosa, host of the Deep Dive radio show, and Bozidar Spirovski, CISO at Blue Dot, who pressure-test the approach from a practitioner point of view.

The show is part of CISO Series, a media network that gathers vendors and practitioners in the same room, sparks honest conversations, and aims to deliver the most fun you will have in cybersecurity.

What You Will Hear In The Episode

In under 20 minutes, the conversation dives into the questions every security and IAM leader is wrestling with:

  • How do you verify identity when criminals deploy AI faster than security teams?
    Fernanda walks through how Incode uses biometrics, liveness, and risk signals to confirm there is a real person behind each interaction, not just a valid device.
  • What if a legitimate user or candidate gets blocked?
    The group discusses appeal paths, secondary reviews, and how to design controls that are both safe and fair.
  • How do you balance fraud prevention with UX and conversion?
    You will hear concrete examples of where to place verification in the hiring and workforce journey so that it feels natural instead of intrusive.
  • What happens when verification fails during high-stakes moments like MFA resets?
    The panel explores help desk workflows, voice-based attacks, and why visual and behavioral proof of life matters.
  • How does all of this stay compliant with GDPR and other privacy rules?
    Fernanda explains how Incode handles facial data, consent, and model training while respecting regional regulations.

If you care about stopping deepfake-driven fraud without breaking your hiring or access flows, this is a quick listen with a lot of signal.

Listen to the episode on Spotify.

Full Transcript

[Voiceover] Connecting security solutions with security leaders. Security You Should Know starts now.

[Rich Stroffolino] Welcome to Security You Should Know. I’m your host, Rich Stroffolino. Today, we’re going to be talking about Incode Technologies and what they’re doing in identity verification. Now, the problem that they’re addressing is one we’ve seen in the news on cybersecurity headlines a whole lot over the last year, AI-powered fraud at the workplace.

Helping us figure out why this is such a recurring problem, we’re going to be talking with Nick Espinosa, the host of the nationally syndicated Deep Dive Radio Show, and Bozidar Spirovski, CISO over at Blue Dot. Nick, I’m going to start with you. Why are we still struggling with AI-powered fraud at work?

[Nick Espinosa] This is such, I think, a relevant conversation to have, primarily because AI is exploding in so many different ways. And by virtue of that, we have seen pervasiveness in everything from AI slop to AI-generated hallucinations being used in court to literal fraud in the workplace.

And quite frankly, nobody wants to hire Kim Jong Un’s second cousin thinking that you’re hiring somebody else, right? So this, I think, is one of those last bastions that goes way beyond the standard know-your-customer situation, because people are now able to use artificial intelligence, through deepfakes and everything else, to basically evade a lot of the traditional detection systems we’ve had to verify identities.

So I think this is a fantastic conversation to have.

[Rich Stroffolino] All right, Bozidar, I’m going to come to you. Why are we still struggling with this AI-powered fraud?

[Bozidar Spirovski] I think I can frame it a bit differently. Because of all the digital transformation hype that we’ve been seeing for the past, I’d say, 20 years, the most successful adopters of digital transformation are criminals. They are adopting it as early as possible, testing it as early as possible, using it as early as possible.

So just as Nick said, everybody now is trying to use AI in some shape or form. Sadly for us, the criminals are great at it, like amazing at it, and they’re doing it all the time, and they’re ahead of the curve, really. Especially in these video fakes or face overlays, voice overlays, etc.

So I must say it’s a very interesting use case, a very challenging proposition, especially in a remote environment, but I would like to hear what our host has to say about this. It’s an interesting topic.

[Rich Stroffolino] All right. Well, today we’re going to be talking with Fernanda Sottil, Senior Director of Strategy at Incode Technologies. Now, to start out, we’re answering three essential questions. How do I explain the value of your solution to my CEO?

What does your solution do, and what does it not do? And what is the pricing model? Fernanda, can you help us out, give us these preliminaries?

[Fernanda Sottil] Sure, happy to go through this. So we solve for the next evolution of workforce risk. So the issue isn’t actually the technology. It is the human layer that our systems still trust, unfortunately, way too easily. So if you think about this, identity access management platforms authenticate devices, they authenticate credentials, but they actually don’t confirm who’s behind them.

This is exactly the gap that AI is exploiting, and this is why identity has become the new attack surface inside the enterprise. So I’ll give you very quickly two examples. During hiring, fake identities, fake candidates with stolen or AI-generated IDs are getting through interviews because their profiles, videos, and voices look very legitimate.

And actually, after onboarding, attackers can very easily impersonate employees, using cloned voices or deepfake calls to reset MFA at the help desk. So at any of these moments, that’s all it takes to bypass even the strongest IAM setup.

So Incode Workforce adds a real-world identity layer to the identity access management system. So whether companies use Okta, use Microsoft Entra, use Ping Identity, what we do is we seamlessly integrate with those solutions and we verify the person behind every high-risk moment throughout the employee journey, from candidate interviews to day-one onboarding to help desk resets, MFA resets.

All of that is facilitated through a very easy setup.

So in terms of how it works: through the enrollment process, an employee or a candidate verifies only once by scanning a government-issued ID and taking a live selfie. We verify with automated technology and government sources that this is a trusted identity.

And then at critical moments, at any future point, such as an MFA reset at IT support or at recruitment, Incode instantly verifies the person with a quick selfie match.
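The enroll-once, verify-on-demand flow Fernanda describes can be sketched as a minimal illustration. Everything here is hypothetical: the function names, the cosine-similarity matching, and the 0.85 threshold are assumptions for illustration, not Incode’s actual API or model.

```python
# Illustrative sketch of a once-enrolled, verify-on-demand identity flow.
# All names and thresholds are hypothetical; this is not Incode's API.

from dataclasses import dataclass

MATCH_THRESHOLD = 0.85  # hypothetical similarity cutoff


@dataclass
class Enrollment:
    employee_id: str
    face_template: tuple  # embedding captured from the enrollment selfie


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb)


def enroll(employee_id, selfie_embedding, id_document_valid):
    """One-time enrollment: store a face template only if the ID checks out."""
    if not id_document_valid:
        raise ValueError("government ID failed verification")
    return Enrollment(employee_id, tuple(selfie_embedding))


def verify(enrollment, live_selfie_embedding):
    """Re-verification at a high-risk moment (e.g. an MFA reset)."""
    score = cosine_similarity(enrollment.face_template, live_selfie_embedding)
    return score >= MATCH_THRESHOLD


record = enroll("emp-001", [0.1, 0.9, 0.3], id_document_valid=True)
print(verify(record, [0.1, 0.9, 0.3]))   # same embedding passes
print(verify(record, [0.9, 0.1, 0.2]))   # dissimilar embedding fails
```

The key design point the sketch captures is that the government-ID check happens exactly once, at enrollment, and every later high-risk moment only needs a quick selfie match against the stored template.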

[Rich Stroffolino] And what are we looking at in terms of pricing?

[Fernanda Sottil] So we actually work through a per employee license model, the tiers based on number of employees. So it’s very seamless. We want to ensure that organizations trigger Incode at any point or at any workflow or key moments. So there’s no cap on usage.

[Rich Stroffolino] All right. So this is super interesting. I’m intrigued. I need to hear more. Bozidar, what are the questions do you have for Incode technologies?

[Bozidar Spirovski] Oh, so many. So I’m going to start with what I read on the website that you have is that you essentially train on people’s faces. You train your model on people’s faces. Am I right to understand that?

[Fernanda Sottil] Correct. So I think one of the key sources of our advantage comes from proprietary AI. So we train on billions of verifications that adapt very fast to fraud. So you think of very different type of attack vectors, some more sophisticated than others, we train on those data sets.

We make sure the data sets are representative across geography, demography, and also fraud type, to make sure that we’re always kind of one step ahead of attackers.

[Bozidar Spirovski] Okay. So in that context, my question coming from the European territories, is how do you comply with privacy, and how do you potentially avail our employees of not agreeing to their faces being part of training data set?

[Fernanda Sottil] That’s a very good question. We have a very robust privacy and compliance framework, and specifically for Europe. We are GDPR compliant. We do have servers. So the data stays and relies safely in the EU. All of the data is encrypted.

And we do have different verification types.

So once employees are verified, let’s say we match the government document with the employee profile or the employee directory, that information is immediately deleted, right? So it’s just that attestation that is shared back to the company, which allows us to make sure that the employee is legitimate, that it’s not a deepfake, and that they are who they claim to be, without any type of liability on the data side.

[Bozidar Spirovski] But do you then train your AIs on the faces that you have recorded?

[Fernanda Sottil] So if the employee does not provide consent or the organization does not provide consent, we immediately delete the data and we do not train on that specific vector.

[Bozidar Spirovski] Okay, thanks. I’m going to hand over to Nick for the next one, maybe.

[Nick Espinosa] Yeah, sure. I mean, I think this kind of opens up a can of worms in terms of trust, right? Like, where does trust originate? What does it require? So if Incode’s goal, for example, is, quote, “Powering a world of trust,” end quote, which I got off your website, what are the minimal conditions for trust?

So is it purely technological, like accuracy and security? Or is it also legal, social, psychological, physiological, anything like that? How does Incode measure or anticipate trust beyond, let’s say, error rates and fraud prevention then?

[Fernanda Sottil] That’s a very good question, Nick. And we actually have different levels of assurance in our platform. So if you think about the very basic question, the first question would be like, “Are you actually real?” right? So making sure that there’s, in fact, a live individual on the other side of the transaction.

The second level of trust would be, “Are you who you claim to be?” right? So that requires a different set of information that we need to get from the employee, in this case maybe tied to a government document and a selfie. Through facial recognition we perform the matching, and that allows us to verify that, in fact, you are Nick.

Then maybe the third level of trust would be like, “Can I safely transact with you?” right? And that’s when we integrate a lot of different data sources. We connect with government sources. We also connect with other transactional data.

And through our network, we’re able to kind of attest, is this individual associated with maybe illegitimate actions, first-party fraud, third-party fraud? And what is the trust score that we could issue back to the relying party?

So, I mean, not all organizations want to be at the third level of trust, right? So that’s why our technology and our architecture would be able to adjust based on what question individuals or organizations are trying to solve for.

[Nick Espinosa] Got you. So one quick follow-up, and then I will kick it over to Bozidar. So I have to understand, and as just an immediate dovetail, how do you balance global scale with local variation and identity documents, all those kinds of things?

So according to your website, you support roughly 4,600 different document types from 200 countries, but documents, norms, legal standards, jurisdictions, identity, privacy laws, all this kind of stuff is various and continuously changing. So how are you basically keeping up with that?

What are the tradeoffs that you’re making here between all of these different standards?

[Fernanda Sottil] That’s a very good question. I’ll try to answer it and capture mostly everything. So we do have a proprietary fraud lab, and basically the main mission of the fraud lab is to be able to deploy as more robust data collection methods as we can.

And in order to do that, we need to make sure that we have extremely high coverage globally. So we collect data through different sources. We have third-party companies that help us with data collection. We also do it ourselves for more specific fraud vectors.

We create those datasets in-house. And we also do it through our customers.

So one of the big benefits that we have is just scale. We work with eight out of the ten top banks in the U.S., three out of the top four telcos in the U.S. So overall, the reach and the scale that Incode currently has gives us a high volume of data that enables us to iterate on fraud quicker than anybody else.

In terms of compliance standards and privacy standards, we do have a very strong last-mile team that’s able to assess what up-and-coming regulations are and what things we should be considering within our workflows and within our product from a compliance perspective.

We have very strong relationships with regulators. So overall, kind of the promise of Incode is being able to iterate with the environment in terms of fraud, in terms of regulation, and also in terms of compliance and user preferences.

[Bozidar Spirovski] Okay. So I’ll just jump in. Let’s talk briefly about flow. So I’m being a user in a company, and for whatever reason, I lost my credentials. Maybe I dropped my phone in the toilet, or whatever happened, happened. And now I need to reauthenticate.

And then I go through your process. For whatever reason again, your machine says I’m not who I am. What is my recourse in that question?

And the follow-up on that, which I think is even more painful, is you mentioned candidates. Me being an insider to the company, I can probably find a way to get to somebody who knows me personally and unlock me. However, for candidates, that is a much more challenging topic.

So what would a candidate do if they consider that they have been misclassified?

[Fernanda Sottil] That’s a very good question, and that’s the big challenge that we work with, which I think also hits on a couple of Nick’s questions, which is the balance between best user experience or best conversion, also with accuracy on the fraud side.

So I think something that we need to understand is that Incode is preventing one of the key vulnerabilities that enterprises face today. So security is our biggest concern. That being said, our machine learning models are very, very customized. And that’s a benefit that we have because the machine learning models are created in-house.

So there are a lot of different controls. There are rules and thresholds that allow us to fine-tune the models so that we recognize legitimate users very quickly and do not block them, while also not letting in attacks or fraudulent attempts.

In the case that you were mentioning, we actually have very strong retake mechanisms. So if a user is incorrectly blocked, there is a capability for them to get another verification attempt through which they can verify their identity.

I don’t have the stat top of mind, but it’s less than 1% of users that would be incorrectly blocked on their second try. So I think the likelihood that a user would be incorrectly blocked is extremely low. That said, we can fine-tune the thresholds.

On the candidate experience, you do call out a very important point, which is the balance, the tolerance that users have for friction is lower. So we need to ensure that the flows that we’re designing are even more intuitive and friendly for the end user with the right recovery processes in case they get incorrectly blocked.

[Bozidar Spirovski] So a follow up on that. Is there a way for the organization… And I’m thinking here about a couple of different, let’s say, employee or candidate experiences mainly because most candidates are already filtered through a bunch of AIs.

And sincerely said, people hate all these AIs. So let’s not beat around the bush.

And now, lo and behold, another AI. So the question, from a practical perspective, is whether an organization using your product can see the error rate of your system, and whether they can decide to have a different flow if that error rate exceeds something, or in some specific cases?

I mean, “Let’s go manual,” for whatever reason.

[Fernanda Sottil] That’s a very good question. So we do offer two certain things. The first one is we offer a passive verification in which the user doesn’t need to do anything, doesn’t need to provide a government document or share their selfie. We verify the identity by connecting with government sources and verifying that the data that the user is presenting matches official databases.

So that’s a verification method with a lower level of assurance, but it is for sure much less friction for the user.

Let’s say users advance through the process, they’re going through live interviews, or they’re maybe going through a background check. Then in addition to that, we can complement those processes with the live government document capture and selfie.

In terms of accuracy, we share very strong analytics with customers and show them metrics for false positives, false negatives, conversion rates, etc., throughout every single stage of the funnel. And something we offer customers is very strict SLAs, both on the false-positive side, which means fraud that we let in, and on the false-negative side, which is legitimate users that we incorrectly block.

So we constantly fine-tune those models and those processes to ensure that we’re always exceeding those SLAs. If there were a situation where we’re, for some reason, blocking too many users, then we would be able to adjust and have some sort of additional method of verification.

On what you shared about manual verification, we’ve typically found that it actually results in worse SLAs, just because there’s a human reviewer and there’s a queue. And human reviewers today actually cannot tell deepfakes from real footage anymore.

So that just drives us to think automated technology is much better from a user experience standpoint in terms of speed, but also accuracy and the level of performance.
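The SLA framing above, false positives as fraud let in and false negatives as legitimate users incorrectly blocked, can be sketched as a simple check. The counts and the 1% SLA targets below are made-up illustration values, not figures from Incode.

```python
# Illustrative SLA check on verification outcomes; all numbers are made up.
# Terminology follows the transcript: false positive = fraud let in,
# false negative = legitimate user incorrectly blocked.

def rates(tp, fp, tn, fn):
    """Compute error rates from a month of verification outcomes.

    tp: legitimate users correctly passed
    fp: fraudulent attempts incorrectly passed
    tn: fraudulent attempts correctly blocked
    fn: legitimate users incorrectly blocked
    """
    fpr = fp / (fp + tn)   # share of fraudulent attempts that got through
    fnr = fn / (fn + tp)   # share of legitimate users that were blocked
    return fpr, fnr


# Hypothetical month of verifications
fpr, fnr = rates(tp=9_800, fp=3, tn=1_200, fn=45)

FPR_SLA, FNR_SLA = 0.01, 0.01  # hypothetical 1% SLA targets
print(f"FPR={fpr:.4f} (SLA met: {fpr <= FPR_SLA})")
print(f"FNR={fnr:.4f} (SLA met: {fnr <= FNR_SLA})")
```

The practical point is that the two rates pull against each other: raising a match threshold drives the false-positive rate down and the false-negative rate up, which is why per-customer threshold tuning against explicit SLA targets matters.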

[Rich Stroffolino] We have time for one last question.

[Nick Espinosa] I have to ask, and I need to pivot a little bit about this. Because what you just said really makes me start thinking about future-proofing adversarial defense. So think about adversarial threats that we’re looking at right now, deepfakes, spoofing, morphing, synthetic identities, etc., etc.

Incode offers passive liveness, deepfake detection, fraud prevention networks, all this kind of stuff.

But we all know that adversarial, basically, attacks are going to evolve. They simply are, whether it’s AI or something else. So what is Incode’s strategy for anticipating new forms of fraud or spoofing? How quickly can you update or patch your models for detection pipelines?

And honestly, I hate to ask this question, but I think it’s important. Do you think there are attack vectors right now that you cannot currently defend or that might require fundamentally different technological or legal frameworks that Incode isn’t addressing, or is on the roadmap, or whatever that looks like?

[Fernanda Sottil] So that is a very good question, and I’m very glad that you asked it, Nick. I think our approach is to have a multi-pronged approach towards quickest defense. So in fact, in terms of adversarial attacks, it is a cat and mouse game. The quicker that we’re able to iterate, then fraudsters will be able to also move very quickly.

As you mentioned, there’s no way we can overstate the speed at which these things are evolving. We have a proprietary fraud lab, which I already described, consisting of data collection methods through our own customers and through third-party organizations.

We also have a program with penetration-testing companies. So we work with between five and eight different penetration-testing companies that constantly test Incode’s technology month over month, call out vulnerabilities, and help our in-house fraud team remediate and adjust for them.

The other thing we’re doing is partnering with universities. So we’re now deploying our machine learning models in collaboration with very well-known universities. Purdue University in Indiana is one of the universities at the forefront of fraud detection in everything related to computer vision.

So we’re partnering very closely with them to be able to ensure that our approaches are complementary to each other, and that they’re also providing an additional perspective in terms of evals and trainings to call out where we have gaps.

So to your question, we do have gaps. We close them as quickly as we can, sometimes within hours. And if we identify that there are specific documents or fraud vectors that we currently don’t cover, those can be patched up over days.

[Rich Stroffolino] All right. And, Fernanda, what’s one thing we didn’t ask about that we need to know?

[Fernanda Sottil] Probably something that’s interesting for the audience is how do you work with other enterprise systems? So would you replace an Okta? Would you replace a Microsoft Entra? How does this work with an MFA, or with a single sign-on?

Something to call out is that we’re highly complementary. In fact, we have very strong partnerships with them. We have a turnkey integration with most of the identity systems out there. We also integrate with help desk systems. So if users need to trigger an Incode verification directly from a service desk platform, be it Zendesk or ServiceNow, they can do so.

And we also integrate with HR systems or recruiting systems, ATS systems. So overall, the technology is very flexible. It’s very modular from what you saw. Our team is incredibly robust and strong. So we’re very excited to be partnering with companies that want to solve for this use case.

And we look forward to hearing about other problems that we could tangentially solve with this type of solution.
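The help-desk pattern Fernanda describes, gating an MFA reset on a verification result rather than on whoever answers the phone, can be sketched as follows. The ticket shape, function names, and verifier are all hypothetical stand-ins, not an actual Incode, Zendesk, or ServiceNow API.

```python
# Hypothetical help-desk gate: an MFA reset ticket is only fulfilled
# after an identity verification succeeds. All names are illustrative.

def request_mfa_reset(ticket, verify_identity):
    """Fulfill an MFA reset only if the verification service approves.

    verify_identity is any callable standing in for a verification
    service; it returns a dict with a "status" field.
    """
    result = verify_identity(ticket["employee_id"])
    if result["status"] != "verified":
        # Failed verification never auto-resets; it escalates instead.
        ticket["state"] = "blocked_pending_review"
        return False
    ticket["state"] = "reset_approved"
    return True


# Stubbed verification service for the example: only emp-001 passes.
def fake_verifier(employee_id):
    return {"status": "verified" if employee_id == "emp-001" else "failed"}


t1 = {"employee_id": "emp-001", "state": "open"}
t2 = {"employee_id": "emp-999", "state": "open"}
print(request_mfa_reset(t1, fake_verifier))  # legitimate employee approved
print(request_mfa_reset(t2, fake_verifier))  # unverified caller blocked
```

The design choice worth noting is that a failed check degrades to human review rather than an automatic reset, which is exactly the help-desk attack path (cloned voices requesting MFA resets) the episode discusses.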

[Rich Stroffolino] Well, that’s just about it for this episode of Security You Should Know. To learn more, head over to incode.com. And if you have any feedback or questions for Fernanda, send them over at feedback@CISOseries.com.

A huge thanks to Nick and Bozidar for helping us learn more about what Incode Technologies is doing. And thank you to you, Fernanda, for being game and for your time in answering all of these questions. And thank you for listening to Security You Should Know.

[Voiceover] That wraps up another episode of Security You Should Know. If you like this program, please subscribe, tell your friends, and leave us a review. All companies showcased on this program are sponsors of CISO Series. If your company would like to be spotlighted and interviewed by our security leaders, go to our contact page on CISOseries.com or just email us at info@CISOseries.com.

Thank you for listening to Security You Should Know: connecting security solutions with security leaders.

Transcription courtesy of Security You Should Know.
