On April 6, the FBI released its 25th annual Internet Crime Complaint Center (IC3) report. For the first time in the report’s history, total cybercrime losses exceeded $20 billion — $20.877 billion across 1,008,597 complaints. Also for the first time: more than half of the dollar losses involved cryptocurrency.

$11.366 billion in crypto-related fraud. Up 22% from $9.3 billion in 2024. And that $9.3 billion was itself up 66% from $5.6 billion in 2023.

The curve is not flattening.

Here’s the number that should concern every compliance officer in crypto: 78% of victims contacted by the FBI’s Operation Level Up had no idea they were being defrauded. They weren’t careless. They weren’t unsophisticated. They passed KYC. They verified their identity. They onboarded through the same compliance pipeline as every other user. And then they lost their money — because nothing verified the integrity of their actions after that initial check.

The fraud is identity-shaped

The IC3 report breaks crypto fraud into categories. The overwhelming majority — $7.228 billion — came from investment scams, primarily pig butchering operations. These are slow-burn social engineering attacks where a scammer builds trust over weeks or months, then directs the victim to move funds to attacker-controlled wallets through legitimate exchanges.

The victim logs into their own exchange account. With their own credentials. Past their own 2FA. And initiates a withdrawal they believe is an investment. The exchange has no way to distinguish this from a legitimate transaction. The KYC check happened weeks ago. The session is valid. The credentials are correct. The human is real. The intent is manipulated.

Another $1.4 billion came from recovery scams — fraudsters re-targeting previous victims by posing as law enforcement or recovery specialists. Another $389 million was lost to crypto ATM fraud, up 58% year over year. In every case: the person initiating the transaction had already been verified at onboarding. The fraud happened at the moment of action.

AI just made it worse

For the first time, the IC3 report tracks AI-enabled fraud as a separate category: 22,364 complaints generating $893 million in losses. The FBI acknowledges this is likely undercounted — many victims don’t realize AI was involved.

The attack categories are exactly what you’d expect: deepfake video calls for business email compromise ($30M+ confirmed), AI voice cloning for impersonation scams, deepfake celebrity endorsements for investment fraud, and AI-generated identification documents that pass automated verification checks.

That last category is the one that should keep identity verification vendors awake at night. If AI can generate documents that pass onboarding KYC, then onboarding KYC is no longer a reliable identity anchor. The verification that happened on day one may have been fraudulent from the start.

But even when the onboarding was legitimate — even when the person is exactly who they claim to be — the fraud still happens. Because the problem isn’t who they are. It’s whether they’re acting of their own free will, with full understanding, at the moment they hit “confirm.”

What the FBI recommends

The FBI’s own SIM swap advisory explicitly recommends biometrics and physical security tokens as multi-factor authentication methods. Their guidance states that MFA should be deployed consistently, not selectively — an implicit argument for step-up verification on every high-risk action, not just at login.

Operation Level Up — the FBI’s proactive victim outreach program — prevented $225.9 million in losses in 2025 by contacting victims before they completed transfers. Think about what that means architecturally: the FBI is essentially performing manual, human-in-the-loop step-up verification. An agent calls the victim, confirms their identity, explains the fraud, and stops the transaction.

That process saved $225.9 million. It also led to 38 referrals for suicide intervention. The human cost of this problem is real.

But Operation Level Up doesn’t scale. It reached only a fraction of the 181,565 reported victims. It requires human agents with security clearances making phone calls. And it works after the fraud is detected, not before the transaction is authorized.

The architectural gap

Every major exchange — Coinbase, Kraken, Binance, Gemini — requires identity verification at account creation. This satisfies regulatory requirements under BSA/AML. It satisfies FinCEN’s Customer Identification Program. It generates the audit trail that compliance teams need.

But it creates a dangerous assumption: that a person verified on day one is the same person, acting with the same intent, on day 90. Or day 180. Or at 3AM on a Sunday when they’re being coached through a “high-yield investment” by someone they’ve never met in person.

The gap between onboarding verification and transaction execution is where $11.4 billion disappeared.

KYC at onboarding → days/weeks/months pass → credentials stolen / user coerced / session hijacked → transaction authorized with no re-verification → funds gone, irreversible.
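
To make the gap concrete, here is a minimal sketch of what that vulnerable path looks like in a typical exchange backend. All names here are hypothetical, not any specific exchange’s code: the withdrawal handler consults the onboarding check and the live session, and nothing else.

```typescript
// Sketch of the vulnerable path. All names are illustrative.
interface Session {
  userId: string;
  kycVerifiedAt: Date | null; // set once, at onboarding (possibly months ago)
}

interface WithdrawalRequest {
  amountUsd: number;
  destinationAddress: string;
}

// Stand-in for the real transfer logic.
async function executeWithdrawal(userId: string, req: WithdrawalRequest): Promise<string> {
  return `tx_${userId}_${req.amountUsd}`;
}

async function handleWithdrawal(session: Session, req: WithdrawalRequest): Promise<string> {
  // The only identity signal consulted is the day-one check.
  if (!session.kycVerifiedAt) {
    throw new Error("KYC required");
  }
  // Credentials valid, 2FA long since passed, session live: funds move.
  // Nothing confirms who is present, or why, at this moment.
  return executeWithdrawal(session.userId, req);
}
```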

What step-up verification changes

Insert one check at the moment of high-risk action: a biometric step-up verification that confirms the actual human, in real-time, before the transaction executes.

The user initiates a $50,000 withdrawal. Before it executes, the camera opens. Liveness detection runs — not a static selfie comparison, but an anti-deepfake challenge that confirms a live human is present. The system returns a signed JWT binding the verified human to that specific transaction hash. The JWT expires in 60 seconds. It cannot be pre-generated. It cannot be replayed. It cannot be faked by an AI on a video call.
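
As a sketch of what the server side of that check could look like, here is one way to verify such a token before executing the transfer, using the jose library (a common choice for JWT verification in Node/TypeScript). The claim names (txn_hash, jti), the issuer value, and the ES256 algorithm are assumptions for illustration; the binding, expiry, and single-use properties are the point.

```typescript
import { jwtVerify, importSPKI } from "jose";

// In-memory replay guard for illustration only; production would use a
// shared store (e.g. Redis) with a TTL matching the token lifetime.
const seenJtis = new Set<string>();

// Claim names and issuer are assumptions for this sketch, not a
// documented schema.
export async function assertStepUpVerified(
  token: string,
  expectedTxnHash: string,
  verifierPublicKeyPem: string
): Promise<void> {
  const key = await importSPKI(verifierPublicKeyPem, "ES256");

  // Signature and expiry check. maxTokenAge enforces the 60-second
  // window even if the issuer set a longer exp.
  const { payload } = await jwtVerify(token, key, {
    issuer: "https://verifier.example.com", // hypothetical issuer
    maxTokenAge: "60s",
  });

  // Binding: the token must name this exact transaction, so it cannot
  // be pre-generated for some other withdrawal.
  if (payload.txn_hash !== expectedTxnHash) {
    throw new Error("Step-up token not bound to this transaction");
  }

  // Replay: each token is single-use.
  if (!payload.jti || seenJtis.has(payload.jti)) {
    throw new Error("Step-up token already used");
  }
  seenJtis.add(payload.jti);
}
```

With a check like this in place, the vulnerable handler sketched earlier gains exactly one line: an awaited assertStepUpVerified call before executeWithdrawal.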

For the pig butchering victim: the biometric check creates a forced moment of friction. The scammer cannot complete the verification on the victim’s behalf. The victim must physically face their camera and confirm the withdrawal. This doesn’t prevent all social engineering — but it forces a moment where the victim must consciously re-engage with the reality of what they’re authorizing.

For the account takeover: the attacker has stolen credentials, bypassed 2FA via SIM swap, and is logged into the victim’s account. They initiate a withdrawal. The biometric check fires. The attacker cannot produce the victim’s face. Transaction blocked.

For the AI-generated identity: the fraudster passed onboarding KYC with a synthetic document and a generated face. Months later, they attempt a large withdrawal. The biometric step-up check compares against the enrollment embedding — which was also synthetic. But liveness detection catches the injection attack. The system flags the discrepancy. Transaction held for review.
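
The three scenarios above reduce to a small decision policy over two signals. A sketch, with an invented threshold and field names:

```typescript
// Illustrative decision policy; the threshold and field names are
// invented for this sketch.
type StepUpSignals = {
  livenessPassed: boolean; // anti-deepfake / injection checks
  faceMatchScore: number;  // similarity to the enrollment embedding, 0..1
};

type Decision = "approve" | "hold_for_review" | "block";

function decide(s: StepUpSignals): Decision {
  if (s.livenessPassed && s.faceMatchScore >= 0.85) {
    return "approve"; // live human, enrolled identity
  }
  if (!s.livenessPassed) {
    // Injected or replayed stream: flag the discrepancy for human
    // review, as in the synthetic-enrollment case above.
    return "hold_for_review";
  }
  // Live human, wrong face: the account-takeover case. Hard stop.
  return "block";
}
```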

The regulatory tailwind

The same week the FBI report dropped, three federal regulatory actions landed:

April 1: Treasury published an 87-page notice of proposed rulemaking (NPRM) establishing state regulatory equivalence standards under the GENIUS Act. BSA/AML obligations are described as a non-negotiable federal floor.

April 7: The FDIC Board approved its own NPRM implementing GENIUS Act prudential requirements for bank subsidiaries issuing stablecoins — including mandatory AML/KYC compliance and a 2-business-day redemption requirement.

April 7: FinCEN proposed a fundamental reform of BSA/AML programs, shifting from prescriptive checklists to risk-based effectiveness. The rule explicitly encourages innovative or novel methods to detect illicit digital asset activity.

The OCC’s February NPRM goes further: it defines every stablecoin holder as a “customer” requiring full Customer Due Diligence. That’s millions of new identity verification obligations at unprecedented scale.

None of these rules explicitly mandate biometric step-up verification. But all of them create a regulatory environment where static, one-time KYC is increasingly insufficient — and where regulators are actively inviting technology that closes the gap between onboarding verification and transaction integrity.

$11.4 billion is the answer

Is onboarding KYC enough? The FBI just gave us the data to answer definitively: no.

Every dollar of that $11.4 billion was lost by someone who had already verified their identity. The onboarding check worked. The fraud happened anyway. Because identity verification at account creation answers the question “who is this person?” It does not answer the question that matters at the moment of transaction: “is this person, right now, intentionally authorizing this specific action?”

That question requires verification at the moment of action. Not at onboarding. Not at login. At the moment the money moves.

One API call. One signed JWT. 292 milliseconds (warm path). That’s the gap between $11.4 billion lost and $11.4 billion saved.
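
For a sense of what that single call looks like from the integrating side, here is a hypothetical client sketch. The endpoint, payload shape, and helper functions are invented for illustration and are not Lorica’s documented API; see the links below for the real thing.

```typescript
// Hypothetical client flow. Endpoint, fields, and helpers are invented
// for this sketch; consult the vendor docs for the actual API.
async function createWithdrawal(amountUsd: number, destination: string): Promise<{ hash: string }> {
  return { hash: `0xabc_${amountUsd}_${destination}` }; // stand-in
}

async function submitWithdrawal(txn: { hash: string }, stepUpToken: string): Promise<void> {
  // Stand-in: the exchange backend would verify the token (binding,
  // expiry, single use) before moving funds.
}

async function withdrawWithStepUp(amountUsd: number, destination: string): Promise<void> {
  const txn = await createWithdrawal(amountUsd, destination);

  // One API call: the user completes the camera liveness challenge and
  // a signed JWT bound to this transaction hash comes back.
  const res = await fetch("https://api.example.com/v1/step-up", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ txn_hash: txn.hash }),
  });
  const { token } = (await res.json()) as { token: string };

  await submitWithdrawal(txn, token);
}
```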

Full documentation: loricaapi.com/docs | Live demo: loricaapi.com/demo