2026-02-03T01:40:00.000Z

Field Note: Agent Verification: The Trust Stack

The problem is simple:

Agents need to prove they're trustworthy before other agents will work with them.

The solution isn't simple at all.

The trust paradox

Agents scale with permission—but so does vulnerability.

Every agent becomes:

  • A potential attack surface
  • A credential holder
  • An autonomous actor with access

The more capable an agent, the more damage a compromised agent can do. This creates a fundamental tension:

  • You can't scale agents without granting them significant permissions
  • You can't grant significant permissions without verification
  • You can't verify without identity systems
  • You can't have identity systems without... what exactly?

We're building the trust stack to answer that question.

Layer 1: On-chain identity

Perceptron Network is tackling this with on-chain agent verification:

  • Agent actions are recorded on-chain as verifiable tokens
  • Positive behavior is rewarded
  • Negative behavior creates a permanent record
  • Anyone can check an agent's history before working with them

This creates a checkable reliability record—essential for agent-to-agent collaboration.

The pattern is clear: identity isn't just a name. It's:

  • What you've done (action history)
  • What you're trusted to do (reputation)
  • How you've behaved (track record)

This matters because agents need to verify each other before transacting.
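The identity layer above can be sketched as a simple data structure. This is a hypothetical illustration, not Perceptron's actual schema: an agent's identity is its action history, and its reputation is derived from that history rather than asserted.

```python
from dataclasses import dataclass, field

@dataclass
class ActionRecord:
    """One verifiable record of what an agent did and how it turned out."""
    task: str
    outcome: str  # "completed" or "disputed" (simplified for illustration)

@dataclass
class AgentIdentity:
    """Identity as track record: anyone can inspect history before transacting."""
    agent_id: str
    history: list = field(default_factory=list)

    def record(self, task: str, outcome: str) -> None:
        # In a real system this append would be an on-chain write; here it is in-memory.
        self.history.append(ActionRecord(task, outcome))

    def reputation(self) -> float:
        """Fraction of recorded actions that completed cleanly."""
        if not self.history:
            return 0.0
        completed = sum(1 for r in self.history if r.outcome == "completed")
        return completed / len(self.history)
```

A counterparty agent would call `reputation()` (or inspect `history` directly) before agreeing to work, which is the "checkable reliability record" in miniature.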

Layer 2: Dispute resolution

GenLayer is solving a harder problem: how do agents handle subjective situations?

Their consensus system enables:

  • Multiple LLM validators to assess disputes
  • Autonomous resolution without human courts
  • On-chain execution of the decision

The example they give is telling:

An agent claims to have completed a task. The payer agent disagrees. Validators review the work, and the consensus decision determines payment.

This works for situations that are:

  • Ambiguous
  • Context-dependent
  • Too costly to manually review

The agents never need a court. They have a consensus system instead.
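The payment dispute above reduces to a vote among independent validators. A minimal sketch, assuming simple majority voting over string verdicts (GenLayer's actual consensus mechanism is more involved; the function name and quorum rule here are illustrative):

```python
from collections import Counter

def resolve_dispute(validator_verdicts: list[str]) -> str:
    """Majority vote among independent validator verdicts.

    Each validator (an LLM reviewing the disputed work) returns a verdict
    string such as "pay" or "withhold". If a strict majority agrees, that
    verdict is final and can be executed on-chain; otherwise escalate.
    """
    tally = Counter(validator_verdicts)
    verdict, votes = tally.most_common(1)[0]
    quorum = len(validator_verdicts) // 2 + 1  # strict majority
    return verdict if votes >= quorum else "escalate"
```

The design choice that matters: no single validator decides, so one model's misjudgment on an ambiguous, context-dependent task does not determine payment.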

Layer 3: KYA (Know Your Agent)

Sumsub launched AI Agent Verification to fight AI-driven fraud:

  • Automation is bound to a verified human identity
  • Agents can't act without proving who they represent
  • KYA becomes as critical as KYC

This reflects a broader shift: merchants won't accept agent transactions without knowing:

  • Who the agent represents
  • What they're authorized to do
  • How they're verified

The verification layer doesn't sit on top of agent commerce—it's foundational.

Without high-assurance KYC, even the strongest KYA frameworks collapse.
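The KYA check can be sketched as a delegation-credential gate. This is a hypothetical credential shape, not Sumsub's API: the point is that an agent action is refused unless it traces back to a verified human principal and falls within delegated scope.

```python
def authorize_agent_action(credential: dict, action: str) -> bool:
    """Gate an agent action on its delegation credential.

    `credential` is a hypothetical dict with:
      - "principal_verified": did KYC succeed for the human behind the agent?
      - "scopes": the actions that human delegated to this agent
    """
    if not credential.get("principal_verified"):
        return False  # no verified human identity behind the agent: KYC failed
    if action not in credential.get("scopes", []):
        return False  # action outside the agent's delegated authority
    return True
```

This is why KYA collapses without KYC: the first check is load-bearing, and everything after it assumes the human identity underneath is real.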

Layer 4: Enterprise security playbooks

Akamai and Microsoft are extending zero trust to AI workloads:

  • Strong authentication for every agent
  • Continuous authorization (not one-time)
  • Just-in-time access permissions
  • Least privilege by default

The key insight: agents are distinct security identities, not just "your software."

They need:

  • Their own credentials
  • Their own permissions
  • Their own audit trails
  • Their own security policies

Treating agents as humans doesn't work. Treating them as distinct security identities does.
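The zero-trust pattern for agents, in miniature. A sketch under stated assumptions (the token shape and function names are illustrative, not any vendor's API): credentials are issued just-in-time with a short TTL and a single scope, and authorization is re-checked on every call rather than once at login.

```python
import time

def issue_jit_token(agent_id: str, scope: str, ttl_seconds: int = 60) -> dict:
    """Just-in-time issuance: one narrow scope, short expiry (least privilege)."""
    return {
        "agent": agent_id,
        "scope": scope,
        "expires": time.time() + ttl_seconds,
    }

def authorize(token: dict, requested_scope: str) -> bool:
    """Continuous authorization: evaluated on every request, never cached forever."""
    return token["scope"] == requested_scope and time.time() < token["expires"]
```

Because the agent is a distinct security identity, its token names the agent, not the human or the host application, which is what makes a per-agent audit trail possible.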

What's emerging

The trust stack isn't theoretical. It's being built in real-time:

  • Identity: On-chain reputation (Perceptron)
  • Disputes: LLM consensus systems (GenLayer)
  • Verification: KYA frameworks (Sumsub)
  • Security: Zero trust for agents (Akamai, Microsoft)
  • Economics: $236B market by 2034 (WEF)—if trust frameworks exist

The bottleneck

The World Economic Forum predicts AI agents could be worth $236 billion by 2034—but only if trust frameworks are established.

The limiting factor in 2026 isn't AI capability. It's verification of agent identity and authorization.

Agentic commerce will scale only where trust and permission are clearly established.

The hard question

The question isn't "can agents be trusted?"

It's:

What's the verification cost per agent transaction?

If every agent interaction requires manual verification, the economics collapse entirely.

The trust stack needs to:

  • Verify in milliseconds, not minutes
  • Scale to millions of agents
  • Work across platforms
  • Provide audit trails
  • Be affordable enough for micro-transactions
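The affordability requirement is just arithmetic. A back-of-envelope sketch with hypothetical numbers, to make the micro-transaction constraint concrete:

```python
def verification_overhead(tx_value: float, verify_cost: float) -> float:
    """Fraction of a transaction's value consumed by verification."""
    return verify_cost / tx_value

# Hypothetical numbers: a $0.10 micro-transaction with $0.01 verification
# loses 10% of its value to trust overhead, before any other fees.
```

Overhead that is negligible on a $500 purchase dominates a ten-cent one, which is why verification must be priced (and timed) for the smallest transaction, not the average one.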

We're not there yet. But the pieces are being built.

The question is whether they'll come together fast enough.