How GenAI Is Fueling a New Era of Applicant Fraud in Healthcare

Editorial Team
12 Min Read


Julia Frament, Head of Global Human Resources at IRONSCALES

Every HR and talent acquisition (TA) professional has experienced a bad hire. You carefully review and assess a candidate’s skills, experience, and credentials. You evaluate them through a series of interviews with multiple stakeholders. You’re all but certain they’ll be a great fit for the role. And then, three weeks into their tenure, you realize you’ve made a huge mistake.

The occasional bad hire is all but inevitable. But now, HR and TA professionals have something far more insidious to worry about: hiring someone who isn’t who they say they are, or who isn’t even real. With the advent of generative AI and deepfakes, the world of applicant fraud has entered a whole new era. Now, bad actors are using AI and deepfakes to fabricate synthetic identities or impersonate others in order to gain access to sensitive data, systems, and more. And given the highly sensitive nature of its data and operations, the healthcare industry is ripe for exploitation.

From Occasional Fibs to Outright Fabrication: Applicant Fraud Has Evolved

Applicant fraud is nothing new. Job candidates have been inflating credentials, falsifying educational backgrounds, and even faking references for decades. But with the advent of generative AI and deepfaked media, the phenomenon has shifted from piecemeal manipulation to wholesale fabrication.

Today, you can find job applicants who aren’t merely deceptive but entirely synthetic, whose resume, cover letter, LinkedIn profile, and even physical likeness have been generated from thin air using AI.

Imagine your organization is hiring for a new IT administrator role. It’s a fully remote position, and the candidate will be tasked with monitoring databases, troubleshooting software, and other, similar duties. You post the job listing on your website and wait. Amid the dozens or hundreds of genuine applicants, however, one candidate stands out.
This candidate’s resume, cover letter, and LinkedIn profile all line up perfectly with what you’re looking for, with extensive, highly relevant experience and all the necessary skills and credentials. It seems almost too good to be true, but you go ahead and schedule an interview over Zoom to feel them out. On Zoom, you’re greeted by a friendly face who thoughtfully and eloquently answers your interview questions with poise and professionalism. They demonstrate expertise and possess a wealth of knowledge. There’s a slight lag in the audio, but you don’t think much of it.

You hire the candidate, mail them their work device, and within weeks of their start date, your security team begins to notice odd behavior: sensitive data being accessed and exfiltrated, malicious software being installed, lateral movement being made through the network. Soon after the security team takes action, you discover that nothing about that candidate was real. Behind the mask was a malicious actor using sophisticated technology to get hired and gain access to your sensitive systems and data.

From GenAI to Deepfakes: The Anatomy of AI-Powered Applicant Fraud

The above scenario may sound like a Black Mirror episode, but it’s far more realistic (and far easier to pull off) than you might think. In fact, something very much like this exact scenario happened last year, when the cybersecurity company KnowBe4 was duped into hiring a North Korean (DPRK) operative. If a large, sophisticated firm like KnowBe4 can fall victim to these attacks, then it’s safe to say that anyone is at risk.

That’s because the deck is very much stacked against organizations today. With a few widely available AI tools, a little bit of time, and some audacity, almost anyone could pull off a scam of this kind. And, unfortunately, that means this trend isn’t going away anytime soon. In fact, in a 2025 report, Gartner forecast that, by 2028, 1 in 4 candidate profiles will be fake.

So, how do they do it? Well, it starts with your very own job posting. Using the job description, hiring details, and readily available information about your organization, the fake applicant can prompt a generative AI tool to create the perfect cover letter and resume for the job. For the identity, they can even use generative AI together with deepfaked images to build out an online presence on sites like LinkedIn, either impersonating a real person or fabricating an entirely synthetic one.

They can list fake references, spoof phone numbers, and use real-time deepfake audio tools to alter their voice or impersonate others. Tools to do this are already cheap, plentiful, and widely accessible. Many of the latest audio deepfake tools require just a few seconds of reference audio to clone a voice, in addition to offering numerous ready-made, synthetic voices.

And for the interview stage, deepfake video tools can be used to superimpose a synthetic likeness, or impersonate a real one, in real time. This very technique was used to great effect in 2024, when a hacker used deepfake technology to impersonate a company’s CFO on a Zoom call. Looking and sounding just like the real CFO, the hacker successfully convinced a more junior finance employee to wire $25 million to an account under his control.

What You Can Do to Detect and Defend Against AI-Powered Applicant Fraud

As these types of scams become more widespread and more sophisticated, HR and TA professionals must adopt new tools and strategies to defend against them. Fortunately, there are clear steps that every organization can take today to help reduce the risk of falling victim to these attacks.

First, hospital recruiters and TA specialists should be wary of “overly polished” or “mirror” application materials. Resumes and cover letters that appear too good to be true, and/or parrot back too much of the information and verbiage straight from the job posting, should raise suspicion. That said, this isn’t always a sign of a fake or fraudulent applicant. A recent global survey from Indeed found that over 70% of job seekers now use generative AI tools during their job hunt. So, signs of its use are far from proof enough of fraud.

Instead, organizations should be on the lookout for overuse. And even then, it should be treated as a red flag, not definitive proof of a scammer. Things that should really raise the alarm are signals like IP addresses that don’t match the candidate’s stated location, the use of virtual phone numbers, and persistent VPN use. While no single factor here tells you a candidate is fraudulent, these signals taken together should give you pause.
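To make that idea concrete, here is a minimal, purely illustrative sketch in Python of how a screening team might tally these signals consistently. The field names, the example values, and the two-signal threshold are all hypothetical assumptions for demonstration, not part of any real screening tool:

    # Illustrative only: tally the red flags described above.
    from dataclasses import dataclass

    @dataclass
    class CandidateSignals:
        ip_country: str       # country inferred from the applicant's IP address
        stated_country: str   # location claimed on the resume or profile
        virtual_phone: bool   # contact number is a VoIP/virtual line
        persistent_vpn: bool  # VPN detected across multiple sessions

    def risk_score(s: CandidateSignals) -> int:
        """Count how many of the three signals are present (0-3)."""
        score = 0
        if s.ip_country.lower() != s.stated_country.lower():
            score += 1
        if s.virtual_phone:
            score += 1
        if s.persistent_vpn:
            score += 1
        return score

    candidate = CandidateSignals("KP", "US", virtual_phone=True, persistent_vpn=True)
    if risk_score(candidate) >= 2:  # hypothetical escalation threshold
        print("Multiple fraud signals present; escalate for extra verification.")

Crucially, a high score is a prompt for additional verification, not an automatic rejection; any one of these signals can have an innocent explanation.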

Perhaps the best defense, however, comes when engaging with the candidate live. The most powerful defense is, of course, an in-person interview. But these aren’t always an option, especially when dealing with IT hires and contractors.

But fear not: there are tricks you can use to detect deepfake use during video calls:

  • Go Off Script – Asking the applicant unusual or unexpected questions can be a good way to assess their authenticity. For example, we often ask candidates to turn around and touch whatever is hanging on the wall behind them to prove they’re real.
  • Look for Lag – If you’re noticing persistent disconnects between the candidate’s audio and video (lag), or if their mouth isn’t tracking with the words they’re saying, stop and ask if they could close applications or move to improve their connection.
  • Wave Hello – Movement can be a good tell for video deepfakes. Ask the candidate to wave their hand in front of their face and look for glitches or irregularities in their video.

Strange as these interview tactics may seem, there’s a good chance they’ll soon become par for the course. Changes in the threat landscape require changes in thinking. And with the stakes being as high as they are, a little bit of awkward conversation is well worth it for your organization’s security.

Don’t Let Your Organization Get Left Behind

As these technologies continue to evolve and advance, healthcare organizations must learn to fight fire with fire. While the guidance above can go a long way toward protecting your healthcare organization, AI and deepfakes are growing more powerful and convincing by the day.

And with the healthcare industry consistently ranked as a “favorite target” of threat actors, it will become increasingly important for healthcare organizations to adopt AI-enabled tools and technologies of their own, ones aimed at detecting and stopping these types of scams long before ever coming face-to-face with these “fake” candidates.


About Julia Frament

Julia Frament is the Global Head of HR for AI-powered email security company IRONSCALES. She focuses on aligning HR strategies with business goals while keeping the company’s people at the heart of everything it does. She leads a global team of HR Generalists, HR Business Partners, and the Head of Global Talent, working together to empower the IRONSCALES workforce and drive meaningful growth.
