When Programmable Money Meets Programmable Identity
Why builders get giddy and lawyers reach for antacids.
As promised, here is the second part, expanding on my previous post about Programmable Money. And this is one post that will get Dave Birch excited. The short version is:
Identity is the Missing Half
Programmable money on its own is powerful—but incomplete. It can execute logic, enforce conditions, and move at machine speed. What it can’t do, by itself, is understand who is involved, in what role, with what rights, and under which constraints. That’s where programmable identity enters the scene. And once the two collide, finance stops being transactional and starts becoming contextual.
Identity used to be a static artifact. A name. A number. A document you photocopied badly at a branch office. In a programmable world, identity becomes dynamic, composable, and—crucially—machine-readable. It’s no longer just “who you are,” but what you’re allowed to do, on whose behalf, under which rules, right now.
Money executing logic without identity is automation.
Money executing logic with identity is governance.
From “Know Your Customer” to “Know the Context”
Traditional identity checks answer a narrow question: Is this person who they say they are? Programmable identity answers a far richer one: What is this entity allowed to do in this moment, in this transaction, for this purpose?
That shift matters because financial risk doesn’t live in static identities. It lives in context.
With programmable identity, money can respond to:
Role (owner, employee, agent, delegate)
Authority (limits, approvals, scopes)
Intent (personal spend, business expense, delegated action)
State (verified, suspended, expired, revoked)
At that point, compliance stops being a gate at the front door and becomes a continuous signal woven into every transaction.
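A minimal sketch of what "contextual" authorization could look like, assuming a hypothetical `IdentityContext` record carrying the four signals above (the names and rules here are illustrative, not any real product's API):

```python
from dataclasses import dataclass

@dataclass
class IdentityContext:
    role: str        # e.g. "owner", "employee", "agent", "delegate"
    authority: float  # spending limit attached to this role
    intent: str      # declared purpose, e.g. "business_expense"
    state: str       # "verified", "suspended", "expired", "revoked"

def authorize(ctx: IdentityContext, amount: float, purpose: str) -> bool:
    """Evaluate a payment against the actor's current context,
    not a static identity record checked once at onboarding."""
    if ctx.state != "verified":
        return False             # suspended/expired/revoked actors fail closed
    if purpose != ctx.intent:
        return False             # money stays bound to its declared purpose
    return amount <= ctx.authority  # role-scoped spending limit

# A verified employee within limits passes; a revoked one does not.
print(authorize(IdentityContext("employee", 500.0, "business_expense", "verified"),
                120.0, "business_expense"))   # True
print(authorize(IdentityContext("employee", 500.0, "business_expense", "revoked"),
                120.0, "business_expense"))   # False
```

The point of the sketch is the shape of the check: every transaction re-asks "what is this entity allowed to do right now?", so compliance runs continuously rather than once at the front door.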
Delegation Is Where Things Get Real
The collision gets interesting—and uncomfortable—when delegation enters the picture. Humans increasingly act through proxies: employees, platforms, software, and now AI agents. Programmable identity makes that delegation explicit, scoped, and enforceable.
Instead of sharing credentials or relying on trust-by-association, delegation becomes codified:
This agent may spend up to X
Only for category Y
On behalf of entity Z
Until time T
With full auditability
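The five constraints above can be sketched as a single delegation grant. This is an illustrative toy, assuming a hypothetical `DelegationGrant` object rather than any real delegation standard:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class DelegationGrant:
    agent_id: str         # who may act
    principal: str        # on behalf of entity Z
    max_amount: float     # may spend up to X
    category: str         # only for category Y
    expires_at: datetime  # until time T
    audit_log: list = field(default_factory=list)

    def spend(self, amount: float, category: str, now: datetime) -> bool:
        ok = (now < self.expires_at
              and category == self.category
              and amount <= self.max_amount)
        # Every attempt is recorded, allowed or denied: full auditability.
        self.audit_log.append((now.isoformat(), amount, category, ok))
        if ok:
            self.max_amount -= amount  # the grant shrinks as it is used
        return ok

now = datetime.now(timezone.utc)
grant = DelegationGrant("agent-7", "acme-corp", 200.0, "travel",
                        expires_at=now + timedelta(days=7))
print(grant.spend(80.0, "travel", now))   # True: in scope, in budget
print(grant.spend(80.0, "meals", now))    # False: wrong category
print(len(grant.audit_log))               # 2: denials are logged too
```

Notice there is no shared credential anywhere: the agent holds a scoped grant, not the principal's keys, and the scope is enforced at the moment of spending rather than reviewed after the fact.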
This is catnip for builders. It’s also a nightmare for anyone who built compliance around vague accountability and after-the-fact reviews.
Agents Don’t Just Need Wallets—They Need Identity
When AI agents start initiating transactions, identity becomes existential. An agent without identity is a bot. An agent with identity is a legal, financial, and ethical actor—whether we’re ready to admit it or not.
Programmable identity gives agents:
A verifiable representation
Explicit authority boundaries
Purpose-bound permissions
Revocation and expiry
Suddenly, “my agent did it” isn’t a punchline—it’s a governance problem.
Who is responsible when an agent overspends?
Who is liable when an agent negotiates poorly?
Who audits the intent behind an autonomous decision?
Lawyers feel the ground shift here. Builders feel opportunity.
Compliance Moves from Paper to Code
Most compliance regimes today are still document-driven. Forms, attestations, screenshots, PDFs uploaded into portals nobody enjoys using. Programmable identity flips compliance from paperwork into runtime enforcement.
Instead of checking after the fact:
Identity claims can be verified cryptographically
Permissions can be enforced transaction-by-transaction
Violations can prevent execution, not just trigger reports
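A sketch of what "runtime enforcement" can mean in practice, using Python's standard `hmac` module as a stand-in for a real issuer signature scheme (production systems would use asymmetric signatures, e.g. verifiable credentials; the key, claim fields, and function names here are all illustrative):

```python
import hmac, hashlib, json

ISSUER_KEY = b"demo-issuer-key"  # hypothetical issuer signing key

def sign_claim(claim: dict) -> str:
    """Issuer signs an identity claim, so verifiers need no PDFs or portals."""
    payload = json.dumps(claim, sort_keys=True).encode()
    return hmac.new(ISSUER_KEY, payload, hashlib.sha256).hexdigest()

def verify_claim(claim: dict, signature: str) -> bool:
    return hmac.compare_digest(sign_claim(claim), signature)

def execute_payment(claim: dict, signature: str, amount: int) -> str:
    """Enforcement at runtime: a bad claim isn't flagged later, it's stopped now."""
    if not verify_claim(claim, signature):
        raise PermissionError("identity claim failed verification")
    if amount > claim["limit"]:
        raise PermissionError("amount exceeds permitted limit")
    return f"paid {amount}"

claim = {"subject": "agent-7", "limit": 100}
sig = sign_claim(claim)
print(execute_payment(claim, sig, 50))      # paid 50
tampered = {"subject": "agent-7", "limit": 1_000_000}
try:
    execute_payment(tampered, sig, 50)      # same signature, altered claim
except PermissionError as e:
    print("blocked:", e)
```

The tampered claim fails cryptographic verification before any money moves, which is the whole shift: the violation prevents execution instead of generating a report.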
Compliance becomes less about proving you did the right thing later, and more about being unable to do the wrong thing in the first place. That’s a profound shift—and not everyone is emotionally prepared for it.
Why Lawyers Get Nervous
This convergence makes lawyers nervous for good reasons. Programmable identity collapses ambiguity. It forces clarity around responsibility, authority, and intent. Gray areas—the natural habitat of legal frameworks—start disappearing.
What gets uncomfortable quickly:
Code enforcing policy without discretion
Automated decisions with legal consequences
Jurisdictional rules embedded into systems
Accountability shifting from humans to design choices
When rules execute automatically, who wrote the rule becomes more important than who clicked the button.
Why Builders Can’t Sit Still
For builders, this is intoxicating. Identity plus money means:
Fewer fraud vectors
Cleaner delegation models
Radically simpler user experiences
Trust embedded by default, not bolted on
It also means the ability to design systems where users don’t constantly re-prove themselves, re-authorize everything, or re-enter the same data endlessly. Identity becomes reusable. Consent becomes programmable. Trust becomes portable.
This is the foundation for:
Agent-native commerce
Continuous onboarding
Invisible compliance
User-owned rules
Power Shifts, Quietly
The uncomfortable truth is that programmable money plus programmable identity redistributes power. Away from institutions that relied on opacity and friction. Toward platforms that can encode trust, rules, and accountability directly into systems.
That’s why this collision matters more than any single rail, wallet, or standard. It’s not a feature. It’s a structural shift.
Money now knows who is acting.
Identity now knows what money can do.
And when those two systems start speaking the same language, finance stops being a maze of exceptions and starts becoming something closer to… intentional.
Builders are giddy because they see what’s possible.
Lawyers are nervous because they see what’s inevitable.
Both are right.