Most hiring processes were built for in-person evaluation — and haven't been updated to account for the fraud risks that remote-only hiring introduces. 23% of companies encountered proxy interview fraud in 2023. 76% of hiring managers say AI has made impostor detection harder. And the median loss per fraudulent hire is measured in the hundreds of thousands. This checklist covers the controls every remote hiring team should have in place in 2025.
Before You Post the Role
☐ Define which roles require enhanced identity verification
Positions with access to sensitive systems, financial data, customer records, or intellectual property warrant stricter verification. Define these categories before hiring begins so the process is consistent and defensible — and so no one on your team has to make a judgment call mid-process.
☐ Audit your current verification gaps
Map your hiring process from application to offer and identify every step where identity is assumed but not confirmed. For most remote-first organizations, the honest answer is: everywhere after the resume stage.
☐ Confirm what your background check provider actually covers
Standard background checks verify a name, SSN, and credentials against databases — they do not verify that the person who interviewed is the same person whose background was checked. Ask your provider exactly which checks tie back to a confirmed identity. The gap between those two things is where fraud lives.
During Screening and Before the Interview
☐ Verify identity before sending the video interview link
This is the most impactful single control you can add to your process. Requiring candidates to verify their government ID and complete a liveness check before receiving the interview link eliminates fraud at the entry point — before any of your team's time is spent. Once you've invested hours in a candidate, the pull to continue is real. Front-loading verification removes that pressure entirely.
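In practice this control is just a gate in the applicant-tracking workflow: no verification, no link. A minimal sketch, assuming a hypothetical `Candidate` record whose `id_verified` and `liveness_passed` flags are populated from your verification provider's results:

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    # Hypothetical fields; a real system would populate these from the
    # verification provider's webhook or API response.
    name: str
    id_verified: bool = False
    liveness_passed: bool = False

def can_send_interview_link(candidate: Candidate) -> bool:
    # The interview link is issued only once BOTH checks have cleared,
    # so no interviewer time is spent on an unverified candidate.
    return candidate.id_verified and candidate.liveness_passed
```

The point of encoding the rule is consistency: the gate fires the same way for every candidate, with no mid-process judgment calls.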
☐ Cross-reference the application photo against LinkedIn and submitted ID
Before the interview, compare the photo on the application, the candidate's LinkedIn profile, and any submitted ID. Significant discrepancies in bone structure, skin tone, or facial proportions are worth flagging.
☐ Verify professional references independently
Don't contact references using the phone numbers or email addresses the candidate provided — locate the individuals independently through LinkedIn or company directories and reach out directly. AI-generated reference networks and spoofed email addresses are a documented fraud vector.
During the Video Interview
☐ Ask an unannounced task requiring physical props
Request that the candidate hold up a specific item, write their name on paper, or perform an unexpected physical gesture. This is one of the most effective in-interview tests for AI-generated video — current real-time deepfake pipelines generally cannot render an unplanned physical object or interaction convincingly.
☐ Watch for visual deepfake indicators
Soft edges around the hairline and jaw, lip sync delays, unnatural blinking, overly smooth skin texture, and reluctance to turn sideways are all signals worth noting. See our full guide on how to detect a deepfake in a video interview.
☐ Ask the candidate to change camera angles
Request that they reposition their camera or turn sideways during the call. Deepfake pipelines are typically configured for a specific position — changing it can expose artifacts or cause the feed to degrade.
☐ Record the interview (with consent) and review at reduced speed
Deepfake artifacts that are invisible at normal speed often become apparent at 50% playback. If something felt off during the call, reviewing the recording slowly can confirm or rule out AI manipulation.
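If you want a standalone half-speed copy rather than scrubbing in a player, ffmpeg's `setpts` (video timestamps) and `atempo` (audio tempo) filters do this; note `atempo` accepts factors from 0.5 upward, so 50% is the floor for a single pass. A sketch that only builds the command line (file paths are placeholders):

```python
def slow_review_cmd(src: str, dst: str, speed: float = 0.5) -> list[str]:
    # Stretch video timestamps by 1/speed and slow audio by the same
    # factor; at speed=0.5 this yields half-speed playback with pitch
    # preserved by atempo.
    return [
        "ffmpeg", "-i", src,
        "-filter:v", f"setpts={1 / speed}*PTS",
        "-filter:a", f"atempo={speed}",
        dst,
    ]
```

Run it with `subprocess.run(slow_review_cmd("interview.mp4", "interview_slow.mp4"))` once the consented recording is exported.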
After the Interview, Before the Offer
☐ Verify credential claims through the issuing institution
Degree verification, professional license checks, and employment history verification should be conducted through the issuing institution directly — not through documents provided by the candidate, which can be fabricated.
☐ Require additional verification for sensitive roles
For positions involving access to classified information, financial systems, or customer data, consider requiring an in-person ID verification step or notarized identity confirmation before extending an offer.
☐ Document your verification process for each hire
Maintaining a record of what identity verification was performed, when, and by whom creates an audit trail that protects your organization if a fraudulent hire is discovered after onboarding.
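The audit trail doesn't need to be elaborate — one structured record per check, per hire, is enough to answer "what was verified, when, and by whom." A minimal sketch (field names are illustrative, not a standard):

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class VerificationRecord:
    candidate_id: str
    check_type: str    # e.g. "government_id", "liveness", "reference"
    performed_by: str  # recruiter or system that ran the check
    result: str        # e.g. "pass", "fail", "flagged"
    performed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

# One record appended per check; asdict() makes it serializable for
# long-term storage in whatever system of record you already use.
record = VerificationRecord("cand-042", "government_id", "j.doe", "pass")
audit_log = [asdict(record)]
```

Timestamps are stored in UTC so records from distributed recruiting teams sort consistently.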
Ongoing Process Controls
☐ Train your recruiting team annually on current fraud techniques
Deepfake technology evolves rapidly. What was a reliable visual tell six months ago may no longer be detectable today. Annual training keeps your team's awareness current with the threat.
☐ Create a clear escalation path for suspected fraud
Every recruiter on your team should know exactly what to do if they suspect a candidate is using AI-generated video or a false identity. An unclear escalation path leads to incidents being dismissed rather than investigated.
☐ Review and update your process after each confirmed incident
If you catch a fraudulent candidate — or discover one after hire — conduct a process review to identify where the controls failed and what would have caught it earlier.
One item from this checklist will do more than all the others combined.
Pre-interview identity verification stops fraudulent candidates before they ever reach your team. Stop Deepfake Candidates handles government ID matching and liveness detection in under 2 minutes, for $5 per verification. No subscription.
Add Verification to Your Process →