👋 Welcome to Unlocked

Identity used to mean something fixed — your face, your fingerprints, your voice, your patterns. But in 2025, all of that can be convincingly faked.

Deepfake biometrics, voice-cloned phone scams, AI-generated behavioral signatures… attackers no longer need your password. They can be you — or at least a version of you convincing enough to fool identity systems built a decade ago.

This week, we’re examining the rise of digital doppelgängers: AI-constructed replicas capable of bypassing biometric systems, hijacking identity workflows, and undermining trust in verification itself.

Let’s break it down.

✉️ Our Sponsor

Realtime User Onboarding, Zero Engineering

Quarterzip delivers realtime, AI-led onboarding for every user with zero engineering effort.

  • Dynamic Voice guides users in the moment
  • Picture-in-Picture stays visible across your site and others
  • Guardrails keep things accurate, with smooth handoffs if needed

No code. No engineering. Just onboarding that adapts as you grow.

🎭 From Deepfakes to Full Identity Replication

AI-driven impersonation has advanced far beyond fake videos.

Attackers now combine multiple AI tools to build composite identities:

  • Voice cloning from meeting recordings or voicemail

  • Facial deepfakes generated from a single social media photo

  • Behavioral mimicking learned from keystrokes, browsing patterns, and login routines

  • Synthetic documents generated through language models trained on corporate templates

According to a 2025 report from the Center for Security and Emerging Technology (CSET), AI-powered impersonation attempts increased more than 300% compared to 2023 — largely driven by cheap, accessible cloning tools.

And identity systems are struggling to keep up.

Real-World Examples

  • Corporate deepfake CFO scam (Hong Kong) — attackers used AI video + voice cloning in a conference call to convince an employee to transfer over $25 million (BBC News).

  • Bank voiceprint bypass — researchers at University College London showed they could bypass major banks’ “voice ID” systems using consumer-grade tools.

  • Zoom impersonation scams — cloned faces have already been used to impersonate executives in virtual meetings (EasyDMARC).

The threat is no longer hypothetical. It's operational.

🧬 Biometric Spoofing: When Your Face Isn’t Enough

Biometric authentication — once considered the gold standard — is now a target.

AI tools can produce:

  • Ultra-realistic facial deepfakes

  • AI-generated fingerprints that match multiple templates

  • Synthetic voiceprints

  • 3D-printed face masks that trick some facial systems

  • Gait imitation models that reproduce walking patterns

The problem isn’t that biometrics are bad — it’s that they’re hard to protect.
Once stolen, they can’t be rotated or reset.

Why Biometrics Are Struggling

  • No revocation: You can't change your face.

  • Cross-system reuse: Same biometric used for bank, work device, building access.

  • Image and audio abundance: Social media provides training data for free.

  • Attackers only need “good enough,” not perfect: Liveness checks vary drastically across systems.

NIST continues to warn that biometrics must never serve as the sole authentication factor; they should always be paired with a second one.

🔍 Behavioral Impersonation: The New Frontier

The newest wave of identity cloning doesn’t target your face or your voice — it targets your habits.

Machine-learning tools can replicate (see the sketch after this list):

  • Typing speed and cadence

  • Mouse trajectories

  • Login schedules

  • Application switching behavior

  • Network patterns

  • Touch screen pressure
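
For context, behavioral-biometric systems typically reduce a signal like typing cadence to simple statistics: dwell time (how long a key is held) and flight time (the gap between keys). Here's a minimal sketch of that featurization — the names and structure are our illustrative assumptions, not any vendor's implementation:

```typescript
// Hypothetical featurization of typing cadence (dwell + flight times).
// Field names and structure are illustrative, not a vendor's API.

interface KeyEvent {
  key: string;
  downAt: number; // keydown timestamp, ms
  upAt: number;   // keyup timestamp, ms
}

interface KeystrokeFeatures {
  meanDwellMs: number;  // how long each key is held
  meanFlightMs: number; // gap between one key's release and the next press
}

function extractFeatures(events: KeyEvent[]): KeystrokeFeatures {
  const mean = (xs: number[]) =>
    xs.reduce((a, b) => a + b, 0) / Math.max(xs.length, 1);
  const dwell = events.map(e => e.upAt - e.downAt);
  const flight: number[] = [];
  for (let i = 1; i < events.length; i++) {
    flight.push(events[i].downAt - events[i - 1].upAt);
  }
  return { meanDwellMs: mean(dwell), meanFlightMs: mean(flight) };
}
```

An attacker whose model reproduces those distributions closely enough will pass the check.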

This is alarming for one reason:

➡️ Behavioral biometrics were supposed to be the last line of defense.

In 2025, emerging research (Carnegie Mellon CyLab) shows that AI models can now generate synthetic behavioral profiles capable of bypassing many legacy “continuous authentication” systems.

The attacker doesn’t just look like you —
they act like you.

⏱️ Why Continuous Authentication Is Now Mandatory

Traditional authentication assumes:

  • Verify once

  • Grant access

  • Trust indefinitely

That model collapses when:

  • Identity can be cloned

  • Sessions can be hijacked

  • User behavior can be replicated

Modern environments — especially remote and hybrid — require continuous identity verification that evaluates context, risk, environment, and behavior throughout the user’s session.

What “continuous” looks like in practice (see the sketch after this list):

  • Real-time monitoring of behavior shifts

  • Device posture checks during a session

  • Network anomalies (new IP, proxy, etc.)

  • Keystroke changes or mouse pattern deviations

  • Session revalidation triggered by risk spikes
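
To make that concrete, here's a minimal sketch of risk-based session revalidation. The signal names, weights, and the 0.7 threshold are illustrative assumptions, not a reference implementation:

```typescript
// Minimal sketch: score in-session risk signals and trigger step-up
// authentication on a spike instead of trusting the session forever.
// Weights and the 0.7 threshold are illustrative assumptions.

interface SessionSignals {
  newIpOrProxy: boolean;          // network anomaly mid-session
  devicePostureDegraded: boolean; // e.g. screen lock or encryption disabled
  behaviorDeviation: number;      // 0..1 distance from the user's baseline
}

function riskScore(s: SessionSignals): number {
  let score = s.behaviorDeviation * 0.5;
  if (s.newIpOrProxy) score += 0.3;
  if (s.devicePostureDegraded) score += 0.3;
  return Math.min(score, 1);
}

function onHeartbeat(s: SessionSignals): "continue" | "step-up" {
  // "step-up" should mean phishing-resistant MFA, not another password.
  return riskScore(s) >= 0.7 ? "step-up" : "continue";
}
```

The point is architectural: verification becomes a loop that runs for the life of the session, not a gate you pass once.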

This is where anomaly detection and risk-based access become essential — themes we covered in previous editions.

⚠️ What CISOs Need to Watch Right Now

1. Voice Biometrics Are Becoming Unsafe

If your contact center relies on “voice match,” assume it can be bypassed.

2. Executive Impersonation Attacks Will Surge

Deepfake board meetings, fraudulent approvals, manipulated video calls.

3. Session Hijacking Will Outpace Password Theft

Attackers skip authentication and go straight for the active session token.

4. Privacy Will Become a Constraint

Continuous authentication must balance user monitoring with data minimization.

5. Identity Logs Must Be Immutable

If an attacker looks like the user, logs may be your only source of truth.
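
One common way to make logs tamper-evident is hash chaining. A minimal sketch follows — the event names are illustrative, and a real deployment would also need append-only storage and external anchoring:

```typescript
import { createHash } from "node:crypto";

// Minimal sketch of a tamper-evident, hash-chained identity log.
// Event names are illustrative; production systems also need
// append-only storage and periodic external anchoring.

interface LogEntry {
  event: string;     // e.g. "login", "mfa-challenge", "session-revoked"
  timestamp: string; // ISO 8601
  prevHash: string;  // hash of the previous entry ("GENESIS" for the first)
  hash: string;      // SHA-256 over this entry's fields + prevHash
}

function appendEntry(log: LogEntry[], event: string): LogEntry {
  const prevHash = log.length > 0 ? log[log.length - 1].hash : "GENESIS";
  const timestamp = new Date().toISOString();
  const hash = createHash("sha256")
    .update(`${event}|${timestamp}|${prevHash}`)
    .digest("hex");
  const entry: LogEntry = { event, timestamp, prevHash, hash };
  log.push(entry);
  return entry;
}

// Editing any earlier entry breaks every later hash, so tampering is
// detectable even when the attacker convincingly "looks like" the user.
```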

🛡️ How to Defend Against Digital Doppelgängers

For IT & Security Teams:

  • Shift toward continuous authentication

  • Require phishing-resistant MFA for privileged accounts

  • Adopt device-bound passkeys with local storage (see the sketch after this list)

  • Deploy session anomaly detection across identity flows

  • Add escrow delays before high-risk operations (like Apple’s Stolen Device Protection model)

  • Invest in identity threat detection & response (ITDR)
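
For the passkey item above, here's a minimal browser-side sketch using the standard WebAuthn API. The relying-party and user values are placeholders, the challenge must come from your server in production, and whether the credential stays strictly device-bound depends on platform policy:

```typescript
// Browser-side sketch: register a passkey via WebAuthn.
// rp/user values are placeholders; fetch the challenge from your
// server in production instead of generating it locally.

async function registerPasskey(): Promise<Credential | null> {
  const challenge = crypto.getRandomValues(new Uint8Array(32)); // server-issued in practice
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example Corp" }, // placeholder relying party
      user: {
        id: crypto.getRandomValues(new Uint8Array(16)), // stable user handle in practice
        name: "user@example.com",   // placeholder
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
      authenticatorSelection: {
        authenticatorAttachment: "platform", // use this device's authenticator
        residentKey: "required",
        userVerification: "required",
      },
    },
  });
}
```

The private key never leaves the authenticator, and the signature is bound to the site's origin — which is precisely why passkeys resist the cloning and phishing attacks described above.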

For Security Leaders:

  • Assume all biometrics can be cloned

  • Audit which workflows rely on voice or facial verification

  • Kill session persistence wherever possible

  • Promote employee training on deepfake threats

  • Engage legal teams early around identity spoofing liability

💡 Unlocked Tip of the Week

Run a “deepfake resilience test”:

Have your red team attempt a social engineering call using an AI voice clone of an executive (with permission).

The results will reveal exactly where your human and technical defenses are weakest — before an attacker discovers it for you.

🙋 Author Spotlight

Meet Nick Marsteller - Head of Content

With a background in content management for tech companies and startups, Nick Marsteller brings creativity and focus to his role as the Head of Content at Everykey.

Over his career, Nick has supported organizations ranging from early-stage startups to global technology providers, driving initiatives across digital content and branding in SaaS, cybersecurity, and entrepreneurial ventures.

Outside of work, Nick loves to travel, attend concerts with friends, and spend time with family and his two cats, Ducky and Daisy.

Wrapping Up

Digital identity used to be something only you possessed.
In 2025, identity is something anyone can copy — if they have enough data.

Deepfakes, cloned voices, and AI-generated behavioral profiles aren’t “future threats.”
They’re real, operational, and increasingly accessible.

The solution isn’t adding more passwords or more biometrics —
it’s redesigning trust so that identity is verified continuously, contextually, and intelligently.

The attackers are using AI.
We need to use AI better.

Until next time,

The Everykey Team


