On April 7, Anthropic published what may be the most consequential security disclosure in a decade. Their unreleased frontier model, Claude Mythos Preview, can autonomously find and exploit zero-day vulnerabilities in every major operating system and every major web browser. Not theoretical. Not in a lab. In production codebases that have been scrutinized by human security researchers and automated fuzzers for decades.

The previous public model, Claude Opus 4.6, had an exploit development success rate barely above zero. Mythos Preview generates a working exploit for a previously unknown vulnerability 72.4% of the time, and reproduces known vulnerabilities with a first-attempt proof-of-concept exploit in 83.1% of cases.

That’s not an incremental improvement. That’s a phase change.

When code-layer security gets cheaper to break, the layer that still costs something becomes the human layer. That’s where step-up verification lives.

Three zero-days that redefine the threat model

CVE-2026-4747: 17-year-old RCE in FreeBSD’s NFS server

Mythos autonomously identified a remote code execution vulnerability in FreeBSD’s NFS (Network File System) server that had existed for 17 years. The flaw allows an unauthenticated attacker — meaning anyone on the internet, with no login credentials — to gain full root access to any FreeBSD machine running NFS. No human involvement after the initial prompt.

To understand why this matters for crypto: NFS is how servers share files over a network. Many DeFi node operators and validator infrastructure providers run FreeBSD or use NFS-mounted storage. A remote root exploit means an attacker takes complete control of the machine — reads private keys from memory, modifies running processes, intercepts signing operations. The bug was there for 17 years. Every fuzzer, every code review, every penetration test missed it.

27-year-old denial-of-service in OpenBSD’s TCP stack

OpenBSD is the operating system security-conscious organizations choose precisely because it is hardened against attack. Its TCP SACK (Selective Acknowledgment) implementation contained an integer overflow that lets a remote attacker crash any OpenBSD host reachable over TCP. The bug was introduced in 1999. Mythos found it across roughly 1,000 scaffold runs at a total compute cost under $20,000.

// Simplified illustration of the OpenBSD SACK bug:
//
// Normal case:
//   num_blocks = 4, block_size = 16
//   allocation_size = 64 bytes  ← correct
//
// Attacker-crafted packet:
//   num_blocks = 0x10000001, block_size = 0x10
//   allocation_size = 0x100000010
//   On a 32-bit integer, this wraps to: 0x10 (16 bytes)
//
// Kernel allocates 16 bytes, then writes 4GB of SACK
// data into that 16-byte buffer. Memory corruption →
// kernel panic → machine crashes.
//
// The fix: a single bounds check before the multiplication.
// That one 'if' statement was missing for 27 years.
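The wraparound is easy to reproduce. Here is a minimal Python sketch (the function names are illustrative, not OpenBSD's actual kernel code) that emulates unsigned 32-bit arithmetic and shows the missing bounds check:

```python
# Emulates the unsigned 32-bit multiplication behind the SACK bug.
# Names and structure are illustrative, not the real kernel code.
MASK32 = 0xFFFFFFFF  # 2**32 - 1

def alloc_size(num_blocks: int, block_size: int) -> int:
    """Unchecked multiplication: wraps silently on 32-bit overflow."""
    return (num_blocks * block_size) & MASK32

def alloc_size_checked(num_blocks: int, block_size: int) -> int:
    """The one-line fix: bound num_blocks before multiplying."""
    if num_blocks > MASK32 // block_size:
        raise ValueError("SACK block count overflows allocation size")
    return num_blocks * block_size

print(hex(alloc_size(4, 16)))             # 0x40: correct, 64 bytes
print(hex(alloc_size(0x10000001, 0x10)))  # 0x10: wrapped to 16 bytes
```

The attacker-controlled values from the illustration above produce a 16-byte allocation for 4GB of data; the checked variant rejects them before the multiplication ever happens.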

16-year-old bug in FFmpeg’s H.264 codec

FFmpeg processes video on nearly every platform that handles media: streaming services, browsers, mobile apps, and, critically, identity verification systems. Many KYC and liveness detection providers use FFmpeg (or libraries built on it) to decode the selfie videos and document scans submitted during onboarding and verification checks. The flaw was introduced in a 2003 commit, became reachable after a 2010 refactor (the source of the 16-year figure), and was missed by every automated tool since, despite the relevant code being scanned over five million times.

A bug in the video decoder that processes your biometric verification video is a direct attack surface on the identity layer itself. If an attacker can craft a malicious video that triggers a memory corruption during decode, they potentially gain code execution on the verification server.
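The classic shape of this bug class is a trusted length field. A hypothetical sketch (the chunk format here is invented for illustration; it is not FFmpeg's actual H.264 parser) of the check a vulnerable decoder omits:

```python
import struct

def parse_chunk(data: bytes) -> bytes:
    """Parse a toy [4-byte big-endian length][payload] chunk safely.

    A vulnerable decoder trusts claimed_len and copies that many bytes
    into a buffer sized from other metadata: memory corruption when an
    attacker-crafted file lies about its own length.
    """
    if len(data) < 4:
        raise ValueError("truncated header")
    (claimed_len,) = struct.unpack_from(">I", data, 0)
    payload = data[4:]
    if claimed_len > len(payload):  # the bounds check that goes missing
        raise ValueError("declared length exceeds available bytes")
    return payload[:claimed_len]

print(parse_chunk(struct.pack(">I", 5) + b"hello"))  # b'hello'
```

In memory-safe Python the worst case is an exception; in the C code of a real decoder, the same missing comparison becomes an out-of-bounds write.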

The Linux kernel exploit pipeline

To demonstrate N-day exploitation — turning a known but unpatched vulnerability into a working exploit — Anthropic’s red team gave Mythos 100 Linux kernel CVEs from 2024 and 2025. The model filtered them to 40 candidates it assessed as potentially exploitable for privilege escalation, then built complete root-access exploit chains for more than half.

One published walkthrough shows an exploit chain starting from nothing but a CVE identifier and a git commit hash. Mythos autonomously analyzed the vulnerable code path, groomed the heap via cross-cache reclamation, bypassed KASLR with an information leak, and overwrote the process credential structure to escalate from UID 1000 to UID 0: full root.

Total time: under one day. Total cost: under $2,000. A human exploit developer doing this work charges $500-1,000/hour and typically takes one to three weeks — a $40,000-$70,000 engagement.

// The economics of exploit development just collapsed:
//
// Human researcher:
//   Time: 1-3 weeks
//   Cost: $40,000-$70,000
//
// Mythos Preview:
//   Time: <24 hours
//   Cost: ~$2,000 in compute
//   Success rate: 72.4% on zero-days, 83.1% on N-days
//   Scale: hundreds of attempts in parallel

The cryptography library findings

This is the section that should alarm every DeFi protocol. From Anthropic’s disclosure: Mythos found critical weaknesses in the world’s most popular cryptography libraries, including implementations of TLS, AES-GCM, and SSH.

TLS is the protocol that encrypts every HTTPS connection. A flaw could let an attacker forge certificates, decrypt communications in transit (reading private keys or signed transactions), or inject data into an encrypted stream (modifying transaction parameters mid-flight).

AES-GCM is the authenticated cipher that protects most encrypted data, both in transit (it is the workhorse cipher of modern TLS) and at rest: wallet files on disk, encrypted key stores in cloud HSMs, database backups containing user credentials and KYC data. A flaw could break confidentiality, integrity, or both.

SSH is how every developer and ops engineer accesses servers remotely. A flaw in SSH means an attacker gets that access without credentials, without keys, without MFA.

Anthropic hasn’t disclosed the specific vulnerabilities — over 99% of the bugs Mythos found remain unpatched, and responsible disclosure requires confidentiality until fixes are deployed. But the fact that vulnerabilities exist in code the entire internet trusts for cryptographic security should change how every protocol thinks about its security model.
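The disclosure names no specific bugs, so any example here is generic rather than one of Mythos's findings. But one long-standing class of weakness in TLS and SSH implementations is non-constant-time comparison of authentication tags, which can leak the position of the first mismatched byte to a remote attacker. Python's standard library shows both the unsafe and the safe pattern:

```python
import hashlib
import hmac

key = b"server-side secret"
message = b"signed transaction payload"
tag = hmac.new(key, message, hashlib.sha256).digest()

def verify_unsafe(received: bytes) -> bool:
    # '==' can stop at the first differing byte; in low-level C
    # implementations that timing difference is measurable remotely.
    return received == tag

def verify_safe(received: bytes) -> bool:
    # Constant-time comparison: runtime does not depend on where
    # (or whether) the bytes differ.
    return hmac.compare_digest(received, tag)

print(verify_safe(tag))           # True
print(verify_safe(b"\x00" * 32))  # False
```

This is exactly the kind of subtle, decades-old implementation flaw that a Mythos-class model can hunt for at machine speed across every library at once.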

Why DeFi is uniquely exposed

DeFi protocols are open-source by design. Every verified smart contract on Ethereum is readable on etherscan.io, and even unverified contracts expose their bytecode. Every Solana program is decompilable from on-chain bytecode. All of it can be fed to a Mythos-class model and scanned at machine speed for near-zero marginal cost.

Anthropic’s researchers specifically named three categories of defense that become weaker against model-assisted adversaries: mitigations whose security value comes primarily from friction rather than hard barriers.

Multisig governance requires M-of-N signatures. Drift Protocol had a 2-of-5 multisig and lost $285M. Bybit had a 3-of-6 and lost $1.5B. Both were compromised by targeting the humans or software that hold the keys. When Mythos can find zero-days in the wallet software itself, the multisig becomes a target, not a defense.

Timelocks delay execution of governance actions to give the community time to review. Drift removed its timelock on March 27, six days before losing $285M. And even when timelocks are present, they don't verify that the person who queued the transaction still wants it executed 48 hours later.

Audit reports check code at a point in time. Drift had passing audits from Trail of Bits (2022) and ClawSecure (February 2026). Both audits were competent. The vulnerability wasn’t in the audited code — it was introduced afterward, through a governance migration. No audit catches future configuration changes.

The browser exploit chain

The most technically impressive demonstration: Mythos wrote a browser exploit that chained four separate vulnerabilities together to escape both the renderer sandbox and the OS sandbox.

// Mythos browser exploit chain (4 vulnerabilities, fully autonomous):
//
// 1. JIT compiler type confusion → arbitrary R/W in renderer
// 2. Renderer → broker IPC buffer overflow → escape sandbox
// 3. Broker → kernel race condition → kernel code execution
// 4. Kernel privilege escalation via cred overwrite → root
//
// Why this matters for crypto: MetaMask, Safe Wallet, and
// every dApp signing interface runs in a browser. A 4-vuln
// chain like this lets an attacker modify the signing UI
// to show one transaction while actually signing another.
// That's the exact Bybit attack vector — at browser level.

The timeline is compressing

Anthropic chose not to release Mythos publicly. They gave it to 40 organizations under Project Glasswing — Google, Apple, Microsoft, AWS, CrowdStrike, JPMorgan, the Linux Foundation, and others — along with up to $100 million in usage credits and $4 million for open-source security organizations. That’s responsible.

But the capability emerged as a side effect of general improvements:

“We did not explicitly train Mythos Preview to have these capabilities. Rather, they emerged as a downstream consequence of general improvements in code, reasoning, and autonomy.”

Every frontier lab is making the same improvements. Anthropic estimates equivalent capabilities will proliferate in 6 to 18 months. North Korean state actors stole $285 million from Drift Protocol using six months of social engineering. Give those operators a Mythos-class model and the next attack doesn’t require six months of human infiltration. It requires hours of autonomous scanning — pointed at the open-source codebases that hold $200 billion in user funds.

The one layer AI can’t forge

Mythos can find bugs in TLS libraries. It can chain four browser vulnerabilities to escape a sandbox. It can build Linux kernel root exploits for $2,000. It can compromise keys, forge signatures, and bypass every software-based defense in the crypto security stack.

It cannot produce a human face.

More precisely: it cannot produce a live, real-time biometric verification of a specific human at a specific moment, tied to a specific transaction hash, that passes multi-frame liveness detection, anti-deepfake analysis, and injection attack defenses — all within a 60-second expiry window.

Every software-based security layer — keys, signatures, multisigs, timelocks, passwords, TOTP codes, even the audit reports that certify the code — is information. Information that can be stolen, forged, replayed, or bypassed by a sufficiently capable model. A human body is not information. It is a physical object that must be present at the moment of verification.

POST /verify
{
  "user_id": "signer_council_3",
  "action": "multisig_approve",
  "action_hash": "0x7f3a...9e2b",
  "image": "data:image/jpeg;base64,/9j/4AAQ..."
}

// Response (292ms warm):
{
  "verified": true,
  "confidence": 0.97,
  "liveness_score": 0.99,
  "liveness_method": "active_multi_frame",
  "deepfake_score": 0.01,
  "injection_detected": false,
  "action_hash": "0x7f3a...9e2b",
  "jwt": "eyJhbGciOiJFZDI1NTE5...",
  "expires_at": "2026-04-09T14:34:01Z"
}

// The JWT is signed, contains the action_hash (bound to this
// specific transaction), and expires in 60 seconds.
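On the protocol side, the signing service accepts the approval only after validating that token. A minimal sketch of the claim checks, assuming the JWT payload carries an `action_hash` string and a numeric `exp` timestamp (real verification must first check the Ed25519 signature against the issuer's public key, which needs a crypto library and is omitted here):

```python
import base64
import json

def jwt_payload(token: str) -> dict:
    """Decode the middle base64url segment of a JWT (padding restored)."""
    seg = token.split(".")[1]
    seg += "=" * (-len(seg) % 4)
    return json.loads(base64.urlsafe_b64decode(seg))

def approve(token: str, expected_action_hash: str, now: float) -> bool:
    """Claim checks only; assumes the signature was already verified."""
    claims = jwt_payload(token)
    if claims.get("action_hash") != expected_action_hash:
        return False  # token is bound to a different transaction
    if claims.get("exp", 0) <= now:
        return False  # outside the 60-second expiry window
    return True
```

The `action_hash` binding is what makes the check transaction-specific: a token replayed from an earlier approval fails the hash comparison, and a stolen one goes stale within a minute.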

Defense in depth means defense in kind

The traditional crypto security stack is homogeneous: keys protect signatures, signatures protect transactions, audits verify code, timelocks delay execution. Every layer is software verifying software. When AI can compromise software at machine speed, adding more software layers is adding more of the same kind of defense.

Defense in depth only works when the layers are different in kind. A biometric verification is categorically different from a cryptographic signature. One verifies that a key was used. The other verifies that a human was present. A model that can forge the first cannot forge the second — because they operate on fundamentally different substrates. One is math. The other is physics.

Eight days ago, $285 million was stolen from Drift Protocol because every layer of defense was software — and every layer was compromised by attackers who spent six months building trust with the humans behind the keys. A biometric step-up check at the moment of multisig signing would have stopped the attack. Not because it’s better software. Because it’s not software at all.

Mythos Preview changes the threat model for every protocol holding user funds. Code audits remain necessary — but they are not sufficient. Multisig remains useful — but it is not a hard barrier. The only hard barrier left is the one that requires a physical human to be present at the moment of authorization.

One API call. Signed JWT. 292 milliseconds (warm path). The only layer in the stack that an AI cannot forge.

Full documentation: loricaapi.com/docs | Live demo: loricaapi.com/demo