Risk · Jan 2, 2025 · 8 min read

How Proxy Fraud is Sabotaging Data Teams (and What CDOs Can Do About It)


Chantal Schutz

CFO, myBasePay


The Proxy Problem: By the Numbers

At the end of 2024, 17% of hiring managers reported encountering suspected deepfake interviews, up from just 3% the previous year, representing a nearly 500% increase in just 12 months. According to HYPR’s 2025 State of Passwordless Identity Assurance report, 95% of organizations experienced a deepfake incident in the last year, and nearly 40% had a GenAI-related security breach.

The threat is especially acute for data and AI teams: modern proxy hiring operations use AI-powered recruitment technology to fool even seasoned technical recruiters, and video manipulation software can deepfake a job applicant into existence in roughly 70 minutes.

The Impact of Remote Work

The move to remote hiring has created the perfect environment for proxy fraud. As of March 2025, 22.8% of U.S. employees worked remotely at least part of the time. When you’re interviewing a machine learning engineer or data scientist remotely, the usual safeguards are no longer in place:

  • No in-person verification
  • Difficulty validating technical credentials in real-time
  • Limited ability to cross-reference identity documents
  • Reduced oversight during the initial work period

For AI and data roles specifically, this creates additional risks. These positions often require access to sensitive data, proprietary code, and intellectual property that could be worth millions. When the person who passed the interview isn’t the person doing the work, organizations face not just fraud but potential data breaches, IP theft, and compliance violations.

The CDO’s Dilemma

Chief Data Officers face an impossible choice: move fast to stay competitive or implement thorough vetting processes that slow down critical initiatives. The pressure to launch AI projects, build LLM capabilities, and demonstrate ROI means many organizations are accepting higher risks in their hiring processes.

The Vendor of Record Solution

This is where Vendor of Record and Employer of Record models become invaluable for data and AI leaders. Rather than sourcing talent directly, a VOR serves as the compliance and risk management layer between your organization and your talent ecosystem.

Here’s how it works:

Robust Identity Verification: Multi-factor identity verification during onboarding, video-based verification sessions, biometric verification for ongoing remote work, and regular check-ins that flag unusual behavior patterns.

Compliance Consolidation: Unified compliance standards across all talent, access to pre-vetted supplier networks, consistent IP assignment agreements, and standardized data security and GDPR compliance protocols.

Financial Control: Consolidated invoicing, clear cost tracking across projects, clean audit trails, standardized rate structures, and budget forecasting based on unified contractor data.

The Strategic Advantage

The organizations that will win the AI race aren’t necessarily those that hire the fastest; they’re the ones that build sustainable, secure, and scalable processes. A VOR model allows CDOs to focus on strategy and implementation while ensuring their talent ecosystem is compliant, efficient, and fraud-resistant.

For Chief Data Officers, the question isn’t whether to use contractors and consultants to build AI capabilities; it’s how to do so safely, efficiently, and at scale.
