Last week, three of the most influential organizations in financial security published a coordinated pair of papers that should change how every crypto exchange and fintech thinks about identity verification. The papers were authored by over 130 experts — senior executives from major banks, federal and state regulators, and security specialists — and their conclusion is unambiguous: onboarding-time identity verification is being systematically defeated by AI, and the industry needs biometric step-up verification at the point of transaction.
This isn’t a vendor whitepaper. It isn’t a consultant’s pitch deck. It’s the coordinated assessment of the American Bankers Association, the Better Identity Coalition, and the Financial Services Sector Coordinating Council — the organizations that represent the institutions processing trillions of dollars annually.
The 10 attack categories
The first paper — “Mitigating AI-Powered Attacks Against Identity and Authentication” — organizes AI-powered threats into 10 specific categories across three attack vectors. Each is a documented, in-the-wild technique already in active use against financial institutions.
Vector 1: Deepfake-driven impersonation
- Deepfakes defeating KYC verification. AI-generated faces now pass first-generation liveness detection systems. Attackers create synthetic identities that clear automated onboarding checks, opening accounts that were never tied to a real person.
- Real-time deepfake fraud in live calls. In 2024, a Hong Kong finance worker transferred $25 million after a deepfake video call impersonating the company’s CFO. The attacker generated the CFO’s face and voice in real-time. Multiple participants on the call were AI-generated.
- LLM-driven vishing. Voice-cloning models generate real-time adaptive phone conversations. The AI responds to questions, adjusts its tone, and maintains character over extended calls. Caller ID cannot be trusted — a spoofed number makes the inbound call appear to come from a known contact — and a recognizable voice no longer proves identity.
- AI-generated fraudulent documents. Synthetic driver’s licenses, passports, and utility bills that pass automated document verification. The 311% increase in synthetic document fraud from Q1 2024 to Q1 2025 reflects the commoditization of these tools.
Vector 2: Synthetic identity creation
- Synthetic identity blending. Real Social Security numbers — often belonging to children, the elderly, or the deceased — combined with fabricated names, addresses, and biographic data. These identities build genuine credit histories over months before being used for fraud.
- AI-enhanced credential stuffing. Credential stuffing bots that dynamically adapt to evade detection — rotating fingerprints, solving CAPTCHAs, mimicking human typing patterns. Traditional rate limiting and device fingerprinting are increasingly ineffective against them.
- Adversarial attacks on biometric liveness. Injection attacks that bypass the physical camera entirely, feeding synthetic video directly to the verification API. The paper flags this as a critical concern — most liveness detection systems were trained on datasets that did not include high-quality AI-generated content.
Vector 3: AI agents as attack surrogates
- AI-generated phishing at 95% lower cost. LLMs have reduced the cost of phishing campaigns by over 95% while maintaining equal or better success rates. Personalized, grammatically perfect, contextually relevant phishing at industrial scale.
- Autonomous account takeover agents. AI agents that execute multi-step account takeover sequences — credential testing, session management, fund movement — without human intervention. The attacker sets objectives; the AI handles execution.
- Automated social engineering at scale. Personalized manipulation across thousands of simultaneous conversations. Each target receives messages tailored to their communication style, professional context, and emotional vulnerabilities.
The numbers behind the threat
- 700% — Increase in fintech deepfake incidents, 2022–2023
- 311% — Increase in synthetic document fraud, Q1 2024–Q1 2025
- $40B — Projected AI-enabled fraud losses in U.S. by 2027 (Deloitte)
- 42% — Of SARs tied to identity / authentication compromise
The Deloitte projection — $40 billion in AI-enabled fraud by 2027 — isn’t a scare number. It’s an extrapolation from current trends that the paper’s 130 expert authors validated. Two-thirds of banks and fintechs already reported rising fraud rates in 2025. The cost of launching a phishing campaign has dropped by 95%. And 60% of people have fallen victim to AI-automated phishing.
What they recommend
The second paper — “Recommendations for Policymakers” — makes 20 specific policy recommendations. Jeremy Grant, the Better Identity Coalition Coordinator, highlighted four with the broadest cross-sector impact:
1. Accelerate NIST’s liveness detection guidance. The papers call for faster development of NIST SP 800-63-4, which will define minimum standards for biometric liveness detection. Current Customer Identification Program (CIP) procedures have no explicit technology standards for liveness — meaning a static selfie comparison satisfies the same requirement as a multi-frame anti-deepfake challenge. The papers argue this gap is untenable.
2. Federal grant program for identity infrastructure modernization. State-level identity systems — driver’s licenses, vital records — are the foundation of financial KYC. Many are decades old. The recommendation ties federal grants to adoption of NIST guidance, creating a financial incentive to modernize.
3. Expand SSA’s electronic SSN verification. Currently limited to credit-related financial services. The papers recommend expanding access to all financial institutions for real-time identity corroboration.
4. Multi-agency task force to monitor AI identity threats. A standing body that tracks the evolution of AI attack techniques and updates guidance accordingly. Not a one-time report — an ongoing surveillance function.
The biometric recommendations are explicit
Buried in the mitigations paper are recommendations that read like a product specification for transaction-level biometric step-up verification:
- Financial institutions must audit their identity verification vendor’s liveness detection capabilities — specifically: which version is deployed, whether it is active or passive, whether it has been tested against injection attacks, and its false acceptance rate for AI-generated images
- Most commercial liveness systems were trained on datasets that did not include high-quality AI-generated content — their detection models are playing catch-up against adversarial AI
- Injection attacks — bypassing the physical camera to feed synthetic video directly to the verification API — are flagged as a critical and underappreciated threat
- Multi-frame facial analysis tracking subtle involuntary muscle movements should replace single-frame comparisons
- CIP procedures need explicit technology standards for liveness detection, not just “verify the customer”
- SMS OTP should be deprecated as a primary second factor for high-risk transactions
- FIDO passkeys and security keys are endorsed for phishing-resistant authentication
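The authentication recommendations above can be sketched as a single policy gate. This is an illustrative sketch, not code from the papers: the factor names, the two-frame minimum, and the `AuthContext` shape are assumptions chosen to make the rules concrete.

```python
# Illustrative policy gate for high-risk transactions, reflecting the
# recommendations above. Factor names and thresholds are assumptions.

from dataclasses import dataclass

PHISHING_RESISTANT = {"passkey", "security_key"}   # FIDO-class factors
DEPRECATED_FOR_HIGH_RISK = {"sms_otp"}             # SMS OTP deprecated

@dataclass
class AuthContext:
    second_factor: str        # e.g. "passkey", "sms_otp"
    liveness_frames: int      # frames analyzed by the liveness check
    injection_tested: bool    # vendor tested against injection attacks

def allow_high_risk_action(ctx: AuthContext) -> bool:
    """Permit a high-risk action only with a phishing-resistant factor,
    multi-frame liveness, and an injection-tested verification pipeline."""
    if ctx.second_factor in DEPRECATED_FOR_HIGH_RISK:
        return False
    if ctx.second_factor not in PHISHING_RESISTANT:
        return False
    if ctx.liveness_frames < 2:   # single-frame comparison is rejected
        return False
    return ctx.injection_tested
```

Under this sketch, `AuthContext("passkey", 8, True)` passes while `AuthContext("sms_otp", 8, True)` is refused outright — the SMS check fires before anything else is considered.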
The papers also recommend adoption of mobile driver’s licenses (mDL) using ISO/IEC 18013-5 cryptographic binding. The argument: a deepfake cannot spoof possession of a private cryptographic key stored in a phone’s secure enclave. Combining mDL presentation with a live biometric check creates two independent verification factors that AI cannot simultaneously defeat.
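The two-factor logic behind that argument can be sketched as a challenge-response check. To keep the sketch self-contained, an HMAC over a fresh server nonce stands in for the device's asymmetric signature — real ISO/IEC 18013-5 presentations use device-bound asymmetric keys and CBOR/COSE structures, not this.

```python
# Sketch of the mDL + biometric pairing: possession of a device-bound key
# (proved by signing a fresh nonce) AND a live biometric check. HMAC
# stands in for the secure enclave's asymmetric signature so the sketch
# needs only the standard library.

import hashlib
import hmac
import secrets

DEVICE_KEY = secrets.token_bytes(32)   # provisioned at mDL issuance

def device_sign(nonce: bytes) -> bytes:
    """What the phone's secure enclave does with its private key."""
    return hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()

def verify_presentation(nonce: bytes, signature: bytes,
                        liveness_score: float,
                        threshold: float = 0.9) -> bool:
    """Both factors must pass: a deepfake can fake the face but not the
    key; a stolen phone holds the key but fails the liveness check."""
    expected = hmac.new(DEVICE_KEY, nonce, hashlib.sha256).digest()
    possession_ok = hmac.compare_digest(expected, signature)
    inherence_ok = liveness_score >= threshold
    return possession_ok and inherence_ok
```

The design point is the conjunction: an attacker must defeat both a cryptographic possession proof and a live biometric in the same session, which is exactly what current AI tooling cannot do.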
Why this matters for crypto
These papers are aimed at traditional banking. But every attack category applies to crypto with equal or greater force. Crypto exchanges face the same deepfake KYC attacks, the same credential stuffing, the same social engineering — plus irreversible transactions, 24/7 operation, and pseudonymous withdrawal addresses that make fund recovery nearly impossible.
A bank can reverse a wire. A blockchain cannot reverse a transaction.
That asymmetry makes the case for biometric step-up verification even stronger in crypto than in traditional banking. When there is no fraud recovery mechanism, the verification must happen before the transaction, not after. And that verification must be resistant to the 10 attack categories these papers describe.
Consider the timeline: a Kraken user lost $18.2 million to social engineering in March 2026. The attacker moved funds within 45 minutes, bridging ETH to BTC via THORChain using a no-KYC exchange. By the time anyone noticed, the funds were across three chains and effectively unrecoverable. A biometric step-up check triggered by the withdrawal amount — before the first bridging transaction — would have stopped the entire chain.
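A withdrawal gate of that kind is simple to express. The risk signals and thresholds below are illustrative assumptions for the sketch, not figures from the papers or from any exchange's actual policy.

```python
# Illustrative pre-transaction gate: decide whether a withdrawal must be
# held for biometric step-up before it executes. Signals and thresholds
# are assumptions chosen for the sketch.

def requires_step_up(amount_usd: float,
                     destination_seen_before: bool,
                     account_age_days: int) -> bool:
    if amount_usd >= 10_000:          # large withdrawal
        return True
    if not destination_seen_before:   # novel withdrawal address
        return True
    if account_age_days < 30:         # young account
        return True
    return False
```

An $18.2 million withdrawal to a fresh address would trip the first two rules before the funds ever reached a bridge; the check runs before the transaction because nothing can run after it.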
The convergence
What makes this week unusual is the convergence. The ABA/Better Identity Coalition papers landed the same week as the FBI IC3 report ($11.4 billion in crypto fraud), the FDIC’s GENIUS Act NPRM, FinCEN’s BSA/AML reform rule, and Treasury’s state regulatory equivalence framework. The data, the expert consensus, and the regulatory machinery all arrived simultaneously.
The data says: $11.4 billion lost, AI fraud surging, deepfakes defeating first-generation liveness detection.
The experts say: audit your liveness detection, require multi-frame analysis, deprecate SMS OTP, implement injection attack defenses.
The regulators say: BSA/AML obligations are non-negotiable, innovative detection methods are encouraged, every stablecoin holder is a “customer.”
All three point to the same conclusion: static onboarding KYC is no longer sufficient. Transaction-level biometric verification — with modern liveness detection, anti-injection protections, and signed cryptographic attestation — is what the industry’s own experts are calling for.
What the API response looks like
When a biometric step-up check runs at the moment of a high-risk transaction, the API returns a signed JWT containing:
{
  "verification_id": "v_8f3a2b1c",
  "user_id": "usr_exchange_customer",
  "action_verified": "withdraw_50000_usdc",
  "match": true,
  "confidence": 0.97,
  "liveness_score": 0.97,
  "liveness_method": "passive",
  "anti_spoof": {
    "combined": 0.97,
    "passed": true,
    "threshold": 0.62,
    "triggered": null
  },
  "verified_at": "2026-04-08T14:32:01Z",
  "expires_at": "2026-04-08T14:33:01Z",
  "latency_ms": 292
}
That response is a signed, timestamped, cryptographically verifiable record that a live human — not a deepfake, not an injection attack, not an AI agent — intentionally authorized a specific action at a specific moment. It expires in 60 seconds. It cannot be replayed. It cannot be pre-generated. It addresses every attack category in the FSSCC papers.
130 experts described the problem. 20 recommendations outlined the direction. The implementation is one API call.