Four incidents, one pattern

Every time you sign up for a crypto exchange, a neobank, or a financial app, you hand over the most sensitive data you have. Government ID. Selfie. Address. Date of birth. Social Security number. You do this because the platform says it needs to verify your identity. You trust that the company handling this data is protecting it.

In the first three months of 2026, that trust collapsed on multiple fronts at once.

The companies you trusted to verify your identity became the source of the very data attackers now use to impersonate you.

All four incidents below happened in Q1 2026. Each involves a different company, a different country, and a different failure mode. They all point to the same structural problem.

  • Incident 01 — IDMerit data exposure (1B records). Identity verification company left ~1 billion records unprotected on the open internet. Names, addresses, birth dates, national ID numbers, phone numbers across 26 countries. 203 million U.S. records alone.
  • Incident 02 — ABN AMRO deepfake bypass (46 accounts). One individual opened 46 bank accounts in other people’s names using deepfake technology to bypass face recognition. Prosecutors presented the case in Amsterdam court, March 2026.
  • Incident 03 — Bithumb KYC failures (6.65M violations). South Korea’s FIU found 6.65 million KYC violations at Bithumb. ₩36.8 billion in fines, six-month partial suspension, new registrations blocked through September 2026.
  • Incident 04 — Persona frontend exposed (2,456 files). Researchers found a publicly exposed frontend belonging to Persona — Discord’s verification vendor — on a U.S. government server. Faces, government IDs, device fingerprints retained up to three years.

The verification layer is the honeypot

The industry built identity verification as a checkpoint. You pass through it once — at signup — and the platform stores the data it collected to prove compliance. Your government ID. Your selfie. Your biometric template. Sometimes for years.

That creates a paradox: the system designed to protect against identity fraud becomes the single richest target for identity theft. Every KYC database is a honeypot. Every verification vendor that stores your face, your ID, and your address is one misconfigured server away from exposing everything about you.

IDMerit proved this at scale. One billion records. Not because they were hacked — because they left a database unprotected. Researchers noted that automated bots constantly scan the internet for exposed databases and can copy them within minutes.

Persona proved something arguably worse: even when the verification vendor is technically sophisticated, the data collection itself is the problem. Persona collects IP addresses, browser fingerprints, government ID numbers, phone numbers, names, faces, and a battery of biometric analytics — and retains it for up to three years. When the frontend was exposed, all of that was accessible.

The system designed to prove who you are is the system most likely to expose who you are.

Deepfakes are beating the checks

While verification vendors are leaking data, the verification itself is being defeated. The ABN AMRO case is the clearest example: one person, 46 fake bank accounts, all opened by presenting deepfake video to a face recognition system that was supposed to confirm the person on camera matched the person on the ID.

This isn’t theoretical: prosecutors presented it as fact in an Amsterdam courtroom. The face recognition system that ABN AMRO relied on to verify identity — the same category of system that every major exchange uses at onboarding — was beaten 46 times by the same attacker.

And that was onboarding verification — the step the industry has spent the most money and effort on. If deepfakes can beat the strongest checkpoint, what chance does a session token from 14 hours ago have at the moment someone initiates a withdrawal?

Storage is the vulnerability. Verification isn’t.

There’s a difference between verifying that a human is present and storing everything about that human. The industry conflated the two. KYC requires both verification and record-keeping. But transaction-layer verification — confirming the human at the moment of action — doesn’t need to store faces, IDs, or biometric templates at all.

If you never store the data, you can never leak the data. Lorica captures a live face, extracts a high-dimensional embedding, encrypts it at rest with authenticated encryption (AES-based, with rotation support), runs liveness and anti-deepfake analysis, and returns a signed JWT. The photo is discarded immediately. There is no database of faces to expose. There is no honeypot to breach.
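The verify-then-discard flow described above can be sketched end to end. This is a toy model, not Lorica's implementation: the embedding function is a stand-in for a real face-embedding model, the signing uses a plain HMAC rather than a full JWT library, and all names are assumptions. The point it demonstrates is structural — only derived features and a signed, expiring attestation survive the call; the raw capture does not.

```python
import base64
import hashlib
import hmac
import json
import time

SIGNING_KEY = b"demo-signing-key"  # in practice: a managed, rotated key


def embed(photo: bytes) -> bytes:
    # Stand-in for a face-embedding model. Real systems derive a
    # high-dimensional vector; what matters here is that only derived
    # features leave this function, never the image itself.
    return hashlib.sha256(photo).digest()


def verify_and_discard(user_id: str, action: str, photo: bytes) -> str:
    embedding = embed(photo)  # would be matched against the user's
                              # encrypted enrolled embedding
    photo = b""               # the raw capture is dropped immediately
    # (liveness and anti-deepfake analysis would run here, on the live capture)
    claims = {
        "user_id": user_id,
        "action_verified": action,
        "verified_at": int(time.time()),
        "jwt_expires": 60,
    }
    payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).decode()
    sig = hmac.new(SIGNING_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return payload + "." + sig  # signed, time-limited attestation
```

Everything downstream of this function sees only the attestation. A breach of the downstream systems yields expired tokens, not a face database.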

What the JWT actually contains

The proof Lorica returns isn’t a stored record — it’s a cryptographic attestation that expires.

POST /verify response — 292ms warm
user_id:            "usr_8f3a2b1c"            ← who was verified
action_verified:    "withdraw_50000_usdc"     ← what they authorized
confidence:         0.97                      ← face match confidence
liveness_score:     0.99                      ← liveness confidence
deepfake_score:     0.002                     ← 0.2% deepfake probability
injection_detected: false                     ← no synthetic injection
verified_at:        "2026-04-13T02:47:12Z"    ← exact timestamp
jwt_expires:        60                        ← seconds until expiry

No face stored. No ID stored. No biometric template retained. Just a signed, time-limited proof that a living human with the right face authorized a specific action at a specific time. If someone breaches your systems, they find expired JWTs and encrypted embeddings that can’t reconstruct a face. There’s nothing to sell on a dark web marketplace. Nothing to use for identity theft. Nothing.
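For a sense of how a backend would consume such a proof, here is a hedged sketch of the release decision for a withdrawal. The field names mirror the example response above, but the thresholds are illustrative assumptions, not documented limits.

```python
def should_release(proof: dict, expected_action: str) -> bool:
    """Gate a withdrawal on an action-bound verification proof.

    Illustrative only: thresholds are assumptions, not official limits.
    """
    return (
        proof.get("action_verified") == expected_action  # bound to this action
        and proof.get("injection_detected") is False     # no synthetic feed
        and proof.get("confidence", 0.0) >= 0.90         # face match
        and proof.get("liveness_score", 0.0) >= 0.95     # live human present
        and proof.get("deepfake_score", 1.0) <= 0.01     # not a deepfake
        and proof.get("jwt_expires", 0) > 0              # proof not expired
    )
```

Note that the check is deny-by-default: a missing field fails the gate, so a truncated or tampered proof can never release funds.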

Q1 2026 was a warning

A billion records leaked by a verification vendor. Deepfakes beating face recognition at a major bank. Millions of KYC violations at one of the world’s largest exchanges. A verification vendor’s biometric data found exposed on a government server.

These aren’t edge cases. They’re the logical outcome of an architecture that collects everything, stores everything, and verifies once.

The alternative isn’t to stop verifying. It’s to stop storing. Verify the human at the moment that matters — when money moves — and discard everything except the cryptographic proof that they were there.

Express interest — see transaction-layer verification that never stores a face.