Uncooped AI

Finally Outside

Don't "Define" Agents. Give Them a Credit Limit.

Authors

Ideas / conceptual model: David Griffin (~70%), AI assistant (~10%), developed together (~20%)
Write-up / drafting: AI assistant (~85%) (with David's direction, edits, and framing)


TLDR


There's no "agent" there there — apologies to Gertrude Stein — until you give it memory, credentials, and a budget.

Once you give a system durable memory and a bounded allowance to act, it starts behaving less like a program and more like an actor. And that's why the deceptively simple question — "what is an agent?" — keeps dissolving into edge cases: scripts that schedule themselves, workflows that fork, swarms that spawn, assistants that "go away" and resume later, tool routers that fan out across SaaS, and systems that act on behalf of humans.

The shape shifts. The consequences don't.

So instead of chasing the perfect definition of "agent," I think we should borrow a model that has spent centuries dealing with fuzzy actors, shifting risk, and expensive mistakes: Underwriting. Not as a gimmick. As a control philosophy.


Why "what is an agent?" keeps failing

If you've been following the agentic space, you've probably seen several competing definitions of "agent."

They're worth acknowledging because none of them is wrong — each captures a slice of reality.

But here's the problem: every one of those definitions breaks when the implementation changes. And the field is moving fast enough that implementations will keep changing.

So if you build your governance around one crisp ontology, you'll keep getting surprised. The "agent" will route around your definition.

A better move is to shift the question: Not "what is an agent?" but "where do consequences happen, and how do we bound them?"

That pivot is the entire game.


Consequences have addresses (even if agents don't)

"Agents" may be distributed and fuzzy. But the points where they can cause harm are usually very concrete:

These are boundary conditions. Choke points. Places where intent becomes consequence.

And this is where underwriting becomes a powerful metaphor — because underwriting is built to handle exactly what agentic systems introduce: fuzzy actors, shifting context, escalating privileges, and expensive mistakes.


Underwriting: the right mental model for agentic security

Banks don't try to philosophically define "trustworthy personhood." They do something more operational: open an account, issue scoped limits, observe behavior, and re-underwrite when circumstances change.

That maps astonishingly well to agentic systems. So here's the thesis:

Treat agentic access like a credit system: issue scoped limits at the choke points, observe behavior, and adjust authority continuously — especially when something materially changes.

This is "Zero Trust" translated into a language people instinctively understand: risk, limits, escalation, and liability.


The Agent Underwriting Model

1) Start with an account, not carte blanche

Before an agent can do meaningful work, it needs a home for accountability — an "account record" you can point to.

That usually begins with an owner/sponsor (human or org) because someone ultimately carries responsibility. But here's the key: underwriting doesn't require a human to click "approve" every time. In the real world, banks don't ask a manager to approve every coffee purchase. They approve the account, set limits, and let most activity run autonomously.

Agent systems are heading the same way: most activity runs autonomously, within limits approved up front.

Underwriting accommodates autonomy by shifting control from per-action permissioning to account-level constraints.

The "account record" should include:

In bank terms: you don't get a credit line without a customer record — even if the customer uses it autonomously.
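As a minimal sketch of what that account record could look like — every field name here is illustrative, not a standard:

```python
from dataclasses import dataclass, field

@dataclass
class AgentAccount:
    """The 'account record': an anchor for accountability."""
    agent_id: str
    sponsor: str                                  # human or org carrying responsibility
    scopes: set = field(default_factory=set)      # what the agent may touch
    daily_budget: float = 0.0                     # bounded allowance to act
    history: list = field(default_factory=list)   # auditable trail of what it has done

    def record(self, event: str) -> None:
        self.history.append(event)

# Opening an account: limits exist before any work happens.
acct = AgentAccount(agent_id="agent-7", sponsor="ops-team@example.com",
                    scopes={"crm:read"}, daily_budget=50.0)
acct.record("account opened")
```

The point of the structure is that accountability (the sponsor) and bounds (scopes, budget) exist before the first action, not after the first incident.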

2) Issue limits like a bank does: small, specific, timeboxed

Instead of "Agent can access Salesforce," think in limits: read-only on a specific object set, a capped number of writes per day, an expiry date after which the grant lapses.

This is what least privilege looks like when you stop thinking in static roles and start thinking in operational risk.
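One way to sketch such a limit — the scope strings, caps, and durations here are hypothetical:

```python
from datetime import datetime, timedelta, timezone

class Grant:
    """A scoped, capped, expiring permission — a credit limit, not a role."""
    def __init__(self, scope: str, max_uses: int, ttl: timedelta):
        self.scope = scope
        self.uses_left = max_uses
        self.expires_at = datetime.now(timezone.utc) + ttl

    def authorize(self, action: str) -> bool:
        if datetime.now(timezone.utc) >= self.expires_at:
            return False          # timeboxed: the grant has lapsed
        if action != self.scope or self.uses_left <= 0:
            return False          # specific and small: wrong scope, or cap hit
        self.uses_left -= 1
        return True

g = Grant(scope="salesforce:contacts:read", max_uses=2, ttl=timedelta(hours=1))
assert g.authorize("salesforce:contacts:read")        # within limit
assert not g.authorize("salesforce:contacts:write")   # out of scope
```

Note what's absent: there is no "Salesforce access" boolean anywhere. Every authorization is an answer to "this action, under this limit, right now?"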

3) Underwriting inputs: build a real "agent credit file"

Banks evaluate income, history, ratios, existing relationships, and references. Your agent credit file might include the analogous inputs: provenance and sponsorship, operational track record, current authority footprint, and verified references from prior deployments.

The key word is verified. "References" that aren't anchored in something checkable become a reputation game.
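"Checkable" can be as simple as a signature. Here's a toy sketch using an HMAC from the sponsor as a stand-in for any real attestation scheme — the key and claim text are made up:

```python
import hmac
import hashlib

SPONSOR_KEY = b"sponsor-secret"   # hypothetical shared key with the sponsor

def verified_reference(claim: str, signature: str) -> bool:
    """A 'reference' only counts if it's anchored in something checkable —
    here, an HMAC the sponsor computed over the claim."""
    expected = hmac.new(SPONSOR_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)
```

An unsigned claim like "ran 90 days with zero incidents" is marketing; the same claim with a verifiable signature is underwriting input.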

4) Step-up: higher limits require stronger proof

If the system wants more authority, it has to qualify — just like a borrower. Step-up requirements might include a longer clean track record at lower limits, stronger sponsor attestation, or richer audit logging.

This creates a path from "toy agent" to "trusted automation" without blind faith.
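The qualification check itself can be trivially simple — what matters is that it exists. A sketch, with thresholds invented for illustration:

```python
def qualifies_for_step_up(days_in_good_standing: int,
                          incident_count: int,
                          sponsor_reattested: bool) -> bool:
    """Higher limits require stronger proof. The thresholds are illustrative —
    the point is that authority is earned against checkable criteria."""
    return (days_in_good_standing >= 30
            and incident_count == 0
            and sponsor_reattested)
```

A "toy agent" fails this check on day one; a system that has run clean for a month with a sponsor willing to re-attest passes it — no faith required.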

5) Holds and pending states for irreversible actions

Some actions are not "single swipe." Money movement is the obvious one, but any action that can't be undone belongs in this category.

Finance uses holds, pending states, and dispute mechanics because irreversible actions create liability. Agent systems should borrow that structure: risky actions enter a pending state, with a review window before they settle.

This is where "agent security" becomes "governance," in the best sense.

6) Material change → re-underwrite (the most important rule)

Banks re-check when something changes: unusual spending patterns, address changes, fraud signals. Agent systems need the same instinct.

Material change triggers might include: a new model version or system prompt, a newly granted tool or credential, a change of owner, or a long dormancy followed by resumption.

When material change happens, the safe default is: shrink limits, re-verify, re-issue grants.

Because "agent comes back later" isn't a corner case. It's normal.


Risk-proportionate friction: weather vs money

This is the part that makes the model usable instead of bureaucratic.

Underwriting gives you a principled way to decide how much friction is appropriate: checking the weather should cost an agent nothing, while moving money should trigger limits, holds, and review.
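Concretely, friction can be a policy table keyed by stakes. The tiers below are hypothetical, but the failure mode they guard against is not:

```python
# Hypothetical friction tiers: low-stakes actions flow freely,
# high-stakes actions pick up limits, holds, and human review.
FRICTION_BY_STAKES = {
    "read_weather": "none",
    "draft_email":  "none",
    "send_email":   "rate_limit + audit_log",
    "move_money":   "hold + human_review",
}

def friction_for(action: str) -> str:
    # Unknown actions default to the HIGHEST friction, not the lowest.
    return FRICTION_BY_STAKES.get(action, "hold + human_review")
```

The one non-negotiable design choice is the default: an action your policy has never seen should be treated like money movement, not like a weather check.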


Swarms: you're underwriting a portfolio, not a single borrower

Swarms are the "1000 cards opened in your name" problem. Even if each worker has low authority, the aggregate can be dangerous.

So you need portfolio controls: aggregate budgets across the swarm, caps on spawn fan-out, and the ability to revoke an entire lineage at once.

This is where governance stops being about "the agent" and becomes about the authority graph plus budgets.
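A sketch of portfolio-level controls — aggregate budget, fan-out cap, lineage-wide revocation — with all numbers invented:

```python
class Portfolio:
    """Swarm controls: the aggregate stays bounded even if each worker is small."""
    def __init__(self, total_budget: float, max_workers: int):
        self.remaining = total_budget
        self.max_workers = max_workers
        self.workers: list = []

    def spawn(self, worker_id: str) -> bool:
        if len(self.workers) >= self.max_workers:
            return False              # fan-out cap: the 1001st card is refused
        self.workers.append(worker_id)
        return True

    def spend(self, amount: float) -> bool:
        if amount > self.remaining:
            return False              # budget is shared, not per-worker
        self.remaining -= amount
        return True

    def revoke_all(self) -> None:
        self.workers.clear()          # kill the entire lineage at once
        self.remaining = 0.0
```

Each worker can be individually tiny and still drain the shared budget — which is why the limit lives on the portfolio, not on any one "agent."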


The payoff: you don't need the perfect definition of agent

Definitions will keep breaking as architectures evolve. Underwriting survives because it's anchored to invariants: authority can be bounded, behavior observed, grants revoked, and outcomes accounted for.

So here's the bottom line:

If it can act, it can be underwritten. And if it can't be bounded, observed, revoked, and accounted for, it's not ready for high-stakes autonomy.

In the agentic era, the goal isn't to win the ontology debate. It's to build systems where the worst-case outcome is still acceptable.

Underwriting is one of the most battle-tested ways humans have ever done that.
