In the next twelve months, AI agents will execute trades, move money, and manage portfolios with increasing autonomy. Every major financial infrastructure company is building toward this. The agent sees an arbitrage opportunity, calculates the risk, and executes the trade — faster than any human could.

This is coming. The technical capability is nearly there. But there’s a structural problem nobody is solving: when an AI agent executes a financial action on behalf of a human, where is the proof that the human authorized it?

Not that the human set up the agent. Not that the human configured the parameters last week. That a specific human approved a specific action in a verifiable, auditable, independently confirmable way.

Build the proof layer before you need it. Not after the first dispute you can’t resolve.

The authorization gap widens

We’ve been writing about the gap between authentication and action for weeks now. A user logs in at 9 AM. A transaction happens at 2 PM. Five hours of blind trust. That gap already causes problems.

AI agents blow that gap wide open.

With an autonomous agent, the human might configure the strategy on Monday. The agent executes 200 trades between Tuesday and Friday. The human never sees any individual trade before it executes. The “gap” isn’t five hours — it’s five days. Or five weeks. Or the lifetime of the agent’s deployment.

When a regulator asks “who authorized this $3M trade at 2:47 AM on Thursday?” the answer is… what? The agent did it. The human set up the agent. But did the human authorize this specific trade?

Current infrastructure has no answer.

Three models of human-agent authorization

As I see it, there are three ways the industry could handle this. Only one actually works.

Model 1: Pre-authorization. The human authorizes a set of parameters upfront — trade within these limits, rebalance within this range, don’t exceed this exposure. The agent operates within those bounds. This is what most agent frameworks are building today.

The problem: pre-authorization is a policy, not proof. It says “the human approved the strategy.” It doesn’t say “the human approved this action.” When the trade goes wrong and the customer disputes it, the platform has a configuration file and a timestamp. That’s not evidence of authorization. That’s evidence of intent at a prior point in time.

Model 2: Post-notification. The agent executes, then notifies the human. If the human doesn’t object within a window, the action is considered authorized. This is how most brokerage auto-trading works today.

The problem: silence is not authorization. “We sent a push notification and they didn’t respond” is a weak legal position. The notification could have been missed, suppressed, or ignored. It creates an implicit consent model that regulators are increasingly uncomfortable with for high-value transactions.

Model 3: Step-up verification at the moment of action. The agent identifies a trade. Before execution, it triggers a verification check. The human confirms with a biometric step-up — a face scan at 292ms warm median. A signed JWT is produced proving the human authorized this specific action at this specific time. The agent proceeds with the JWT attached to the transaction.

Pre-authorization proves intent at setup. Post-notification assumes consent by silence. Step-up verification proves a human is present at the moment the money moves.

What the flow looks like

End to end, the flow runs:

1. The agent identifies a trade that crosses a step-up threshold.
2. It pauses execution and triggers a verification check.
3. The human receives the action context: what the agent wants to do.
4. The human confirms with a biometric step-up.
5. A signed JWT is produced, binding the human, the action, and the timestamp.
6. The agent executes the trade with the JWT attached.

The entire human interaction is step 4. Under two seconds. The human sees what the agent wants to do, confirms with their face, and the agent proceeds. If the human doesn’t confirm within the timeout, the agent drops the trade and moves on to the next opportunity.

The JWT that comes out of step 5 carries everything an auditor, a regulator, or a dispute resolution panel would need. Human identity. Action context. Timestamp. Liveness method. Confidence score. Independently verifiable signature.
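To make that concrete, here is a minimal sketch of such a token using only the standard library. The claim names (`sub`, `act`, `iat`, `liveness`, `confidence`) are illustrative assumptions, not Lorica's actual schema, and a production token would use an asymmetric algorithm (RS256/ES256) so auditors can verify without holding the signing key; HS256 just keeps the sketch self-contained.

```python
import base64
import hashlib
import hmac
import json
import time


def b64url(data: bytes) -> str:
    # JWTs use unpadded base64url encoding for all three segments.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def sign_authorization_jwt(secret: bytes, claims: dict) -> str:
    # header.payload.signature, HMAC-SHA256 over the first two segments.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"


def verify_authorization_jwt(secret: bytes, token: str) -> dict:
    # An auditor recomputes the signature; a mismatch means the token
    # was forged or tampered with.
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = b64url(hmac.new(secret, signing_input, hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        raise ValueError("signature mismatch")
    padded = payload + "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))


# Hypothetical claim set mirroring the fields listed above.
claims = {
    "sub": "user_8f3a",                  # human identity
    "act": "sell_3000000_USD_venue_x",   # action context
    "iat": int(time.time()),             # timestamp
    "liveness": "face_passive",          # liveness method
    "confidence": 0.97,                  # match confidence
}
token = sign_authorization_jwt(b"demo-secret", claims)
assert verify_authorization_jwt(b"demo-secret", token)["act"] == claims["act"]
```

The key property is the last line: anyone holding the verification key can confirm, without trusting the platform's database, that this human approved this action at this time.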

Threshold-based step-up

The obvious objection: if an agent executes 200 trades a day, you can’t ask the human to approve all 200. Correct. You don’t need to.

The model that works is threshold-based step-up. The agent operates autonomously within pre-approved parameters. Small trades, routine rebalancing, limit orders within defined ranges — these execute without human intervention. The agent’s pre-authorization policy covers them.

But when a trade crosses a threshold — amount, counterparty risk, new venue, unusual timing, exposure concentration — the agent triggers a step-up verification. The human confirms. The JWT is attached. The trade executes with cryptographic proof of human authorization.

The threshold is configurable. A $50K per-trade limit for most operations. A $500K daily aggregate limit. A new-venue flag. Whatever the platform’s risk model requires. The point is that high-risk actions get human proof and low-risk actions get autonomous execution. Both models coexist.

# Agent decision logic (pseudocode)
if (trade.amount > policy.step_up_threshold
        or trade.venue not in policy.approved_venues
        or daily_aggregate > policy.daily_limit):

    # Trigger human verification
    jwt = lorica.verify(
        user_id=agent.owner_id,
        action=f"{trade.type}_{trade.amount}_{trade.venue}",
        mode="passive",
    )

    if jwt.match and jwt.confidence > 0.90:
        execute_trade(trade, proof=jwt.token)
    else:
        log_skipped(trade, reason="human_verification_failed")

else:
    # Within policy bounds — execute autonomously
    execute_trade(trade, proof=policy.pre_auth_token)
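For readers who want to run the decision logic, here is a self-contained version with the verification call stubbed out. The `Policy` and `Trade` fields and the shape of the step-up result are assumptions carried over from the pseudocode, not a real client API.

```python
from dataclasses import dataclass


@dataclass
class Policy:
    step_up_threshold: float
    daily_limit: float
    approved_venues: set
    pre_auth_token: str = "pre-auth"


@dataclass
class Trade:
    type: str
    amount: float
    venue: str


def requires_step_up(trade: Trade, policy: Policy, daily_aggregate: float) -> bool:
    # Any single breach is enough to demand live human verification.
    return (trade.amount > policy.step_up_threshold
            or trade.venue not in policy.approved_venues
            or daily_aggregate > policy.daily_limit)


def decide(trade, policy, daily_aggregate, verify):
    """Return (action, proof). `verify` stands in for the biometric step-up."""
    if requires_step_up(trade, policy, daily_aggregate):
        result = verify()  # hypothetical step-up call
        if result.get("match") and result.get("confidence", 0) > 0.90:
            return ("execute", result["token"])
        return ("skip", None)
    return ("execute", policy.pre_auth_token)


policy = Policy(step_up_threshold=50_000, daily_limit=500_000,
                approved_venues={"venue_a", "venue_b"})

# Routine trade inside bounds: executes on the pre-authorization policy,
# the verify stub is never called.
print(decide(Trade("buy", 10_000, "venue_a"), policy, 120_000,
             verify=lambda: {}))
# → ('execute', 'pre-auth')

# Large trade: executes only if the human passes the step-up.
print(decide(Trade("sell", 3_000_000, "venue_a"), policy, 120_000,
             verify=lambda: {"match": True, "confidence": 0.97, "token": "jwt..."}))
# → ('execute', 'jwt...')
```

Note that the two branches produce different proof artifacts: the autonomous path carries the pre-authorization policy token, while the step-up path carries a JWT tied to one specific action. That distinction is exactly what an auditor would look for.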

Why this matters now

The agent infrastructure is being built right now. Frameworks, orchestration layers, tool-use protocols, custody integrations — all moving fast. But the authorization layer is being treated as an afterthought. “We’ll figure out compliance later” is the default posture.

That’s how the industry ended up with wallet signatures as the proof of authorization for crypto transactions. Nobody designed a human-level authorization system because the key-level system was good enough at the time. Now institutions need human-level proof and the architecture doesn’t support it.

The same thing is about to happen with AI agents. If the authorization layer isn’t built into the agent infrastructure from the start, we’ll end up with another gap — agents executing billions of dollars in transactions with no verifiable proof that any human approved any of it.

The human-in-the-loop layer

This is what Lorica becomes in an agent-driven financial system: the human-in-the-loop layer. Not a bottleneck. Not a gate on every transaction. A verification checkpoint that activates at the moments that matter — when risk crosses a threshold, when an action falls outside established parameters, when a regulator would ask “who approved this?”

The JWT proves the human said yes. Not the agent. Not the policy file. Not the absence of a “no.” A live human, biometrically confirmed, at the moment the decision was made.

The agents are coming. The authorization proof needs to come with them.

Build the proof layer before you need it. Not after the first dispute you can’t resolve.