[ARCHITECTURE & CONCEPTS]

2/24/26

What OAuth Did for Delegation, AAB Does for Agent Execution

Author: Amjad Fatmi

In 2006, if you wanted a third-party application to access your Twitter account, Twitter asked for your password.

Not a token. Not a scoped permission grant. Your actual account password. The application stored it. It used it whenever it needed to. It had the same rights you did. There was no way to revoke access without changing your password everywhere. No way to limit what it could do. No record of what it did on your behalf.

This was not considered a flaw at the time. It was considered how API access worked.

Then Blaine Cook, working on Twitter's OpenID implementation in November 2006, sat down with Chris Messina and others and asked a question that seems obvious in retrospect: why is there no open standard for API access delegation that does not require sharing credentials?

After reviewing existing industry practices, they concluded there was none. They built one. And once it existed, the old approach became indefensible.

What follows is a history of three moments where the same thing happened at three different layers of the infrastructure stack. A structural gap existed. Everyone worked around it. Someone named and formalized it. The standard won. Looking back, it became impossible to imagine operating without it.

The fourth moment is underway now, at the agent execution layer.

OAuth: Naming the Delegation Gap

The problem OAuth solved had a precise shape. It was not authentication. It was not authorization in the broad sense. It was specifically this: how does a user grant a third-party application scoped, revocable access to a resource on their behalf, without sharing the credential that would give that application unlimited access?

Before OAuth, the answer was: they don't. The application gets the credential. The application has whatever that credential allows, for as long as the application holds it. Revocation means changing the password and notifying every application that had it.

The OAuth working group formed in April 2007. The initial specification was drafted in July 2007. The OAuth Core 1.0 final draft was released on October 3, 2007. Within months, libraries existed for PHP, Rails, Python, .NET, and Perl. In 2008, Google adopted the protocol. By 2010, Twitter required all third-party applications to use OAuth rather than password-based access.

OAuth 2.0, published as RFC 6749 in October 2012, simplified the model. Facebook, Google, and Microsoft all adopted it. The "Login with" button became ubiquitous. The protocol became infrastructure.

Eran Hammer, lead author of the OAuth 2.0 specification, resigned from the IETF working group in July 2012, calling the result more complex and less interoperable than it should have been. He was not wrong about some of the tradeoffs. He was wrong about the outcome. OAuth 2.0 is now the foundation of API authorization across the internet, the basis of OpenID Connect, and the mechanism that billions of authentication flows traverse daily.

The principle OAuth established is what matters here: applications should not hold credentials. They should receive scoped, revocable tokens issued by an authorization server that mediates access. The credential stays with the identity provider. The application gets only the access it was explicitly granted, for the purpose it was granted, revocable without touching the underlying credential.
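The shape of that principle can be sketched in a few lines. This is an illustrative resource-server check, not any specific provider's API: the token table, scope names, and function names are hypothetical. The point is structural: the check consults a grant, never a password, and revocation touches only the token.

```python
import time

# Hypothetical token record, as issued by an authorization server.
# The scope names and expiry format are illustrative only.
TOKENS = {
    "tok_abc123": {
        "scopes": {"tweets:read"},
        "expires_at": time.time() + 3600,
        "revoked": False,
    },
}

def authorize(token: str, required_scope: str) -> bool:
    """Grant access only if the token exists, is unexpired, unrevoked,
    and carries the required scope. The user's credential never appears."""
    grant = TOKENS.get(token)
    if grant is None or grant["revoked"] or time.time() >= grant["expires_at"]:
        return False
    return required_scope in grant["scopes"]

def revoke(token: str) -> None:
    """Revocation invalidates the grant without touching the password."""
    if token in TOKENS:
        TOKENS[token]["revoked"] = True
```

Everything the pre-OAuth model could not do falls out of this structure: access is scoped to what was granted, expires on its own, and can be cut off per-application.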

This principle was described as overhead when it was introduced. Then it was everywhere. Then it became unthinkable to build without it.


TLS: Naming the Transport Gap

Netscape developed SSL in 1994 and 1995, initially for its Navigator browser. SSL 3.0 was released in 1996. TLS 1.0, a refinement of SSL 3.0 under an IETF standard name to distinguish it from Netscape's proprietary protocol, was published as RFC 2246 in January 1999.

For most of the following decade, the web ran unencrypted. HTTP traffic was plaintext. Every request, every session cookie, every form submission, every password entry traversed networks in a form that anyone with network access could read. SSL and TLS existed. They were available. They were not used for most web traffic because the argument against them was familiar and plausible: latency overhead, certificate cost, configuration complexity, and the claim that encryption was only necessary for sites handling sensitive data.

News sites did not need it. Blogs did not need it. The performance cost was not worth it.

In August 2014, Google announced HTTPS as a ranking signal in search. This was described at the time as a minor change affecting less than one percent of global search results. It was not minor in its consequences. Engineers who had been told for years that HTTPS was overhead for their site now had a business reason to care. In 2016, more than half of all web pages loaded over HTTPS. Google Chrome 68, released in July 2018, marked every HTTP site as "Not Secure" in the address bar. By 2019, HTTPS accounted for more traffic than plain HTTP. Today, over 95 percent of web traffic is encrypted.

The argument against HTTPS in 2007 was that encryption was overhead for sites not handling sensitive data. That argument looks absurd from 2026. The POODLE vulnerability in 2014, Heartbleed the same year, and a decade of documented credential theft and session hijacking made the cost of not encrypting visible rather than theoretical.

The transition from optional to mandatory took roughly twenty years from standardization to ubiquity. What ended it was not a technical breakthrough. TLS worked in 1999. What ended it was a combination of browser pressure, search ranking signals, Let's Encrypt eliminating the certificate cost barrier in 2016, and enough documented harm to make the overhead argument impossible to sustain.

Kubernetes Admission Controllers: Naming the Pre-Execution Gap

Kubernetes authentication answers the question: who is making this request? RBAC authorization answers the question: is this requester allowed to perform this operation? When both checks pass, the request proceeds.

Neither of them answers a third question: does the specific thing being deployed meet the security standards we require before it runs?

A developer with permission to deploy pods could deploy a container running as root. A container pulling from an untrusted registry. A pod with access to the host network. A privileged container that can effectively escape isolation. All of these pass authentication. All pass authorization. All represent security decisions that the identity and permission checks are structurally incapable of catching, because those checks govern who can request what operations, not whether specific workload configurations are acceptable.

The Kubernetes documentation describes admission controllers precisely: "An admission controller is a piece of code that intercepts requests to the Kubernetes API server prior to persistence of the resource, but after the request is authenticated and authorized."

Prior to persistence. After authentication and authorization. A third gate, operating on the content of what is being deployed, evaluated before it executes.
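The shape of that third gate is easy to see in code. Below is a minimal sketch of a validating admission check that denies privileged containers, written as a pure function over the admission.k8s.io/v1 AdmissionReview request/response shape; a real deployment would serve this behind a webhook endpoint with TLS, and the policy shown is a single illustrative rule, not a complete production policy.

```python
def review(admission_review: dict) -> dict:
    """Evaluate an AdmissionReview request and deny any pod that asks
    for a privileged container. Field paths follow the admission.k8s.io/v1
    AdmissionReview schema (request.uid, request.object, response.allowed)."""
    request = admission_review["request"]
    pod = request["object"]

    # Inspect the content of what is being deployed, not who is deploying it.
    denied = [
        c["name"]
        for c in pod.get("spec", {}).get("containers", [])
        if c.get("securityContext", {}).get("privileged", False)
    ]

    response = {"uid": request["uid"], "allowed": not denied}
    if denied:
        response["status"] = {
            "code": 403,
            "message": "privileged containers are not permitted: " + ", ".join(denied),
        }
    return {
        "apiVersion": "admission.k8s.io/v1",
        "kind": "AdmissionReview",
        "response": response,
    }
```

The requester's identity and permissions never appear in this function. It operates purely on the workload configuration, which is exactly what the first two gates cannot see.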

The behavior is fail-closed by design. Datadog's security research documents it plainly: "Some form of admission control is typically a security requirement for production clusters that allow users access to create new workloads. Without it, it's possible for users to create pods that allow for full access to underlying cluster nodes." Sysdig's assessment is equally direct: "you'll very likely want to enable at least the default admission controllers for every Kubernetes cluster that you intend to use for any type of production workload."

There was a period when admission controllers were optional. That period is over. They are now considered a prerequisite for production Kubernetes operation. OPA Gatekeeper, Kyverno, and similar policy engines have built ecosystems around this gate. The question is no longer whether to use them. It is which policy framework to use.

The Structural Pattern Across All Three

Three infrastructure layers. Three structural gaps. Three standards that named them. One principle that applies at each layer.

At the identity layer, OAuth established that credential sharing is not delegation. Delegation requires a mediating authority that issues scoped, revocable tokens tied to specific grants of access.

At the transport layer, TLS established that encryption is not a feature for sensitive data. It is a property that all communication should have, because the cost of not having it compounds across every unprotected interaction.

At the infrastructure layer, admission controllers established that authentication and authorization are not sufficient to govern what runs. A third evaluation, operating on the specific parameters of what is being deployed, is required before execution.

Each gap, once named and standardized, looks obvious in retrospect. Each was described as overhead before it was described as infrastructure. Each became a baseline requirement for production operation after the combination of documented harm, regulatory pressure, and a low-friction adoption path made the alternative untenable.

The Agent Execution Gap

When an agent decides to take an action, the current stack checks several things.

The agent has an API key for the service it is calling. Authentication passes. The agent's process runs with IAM permissions that cover the service's API. Authorization passes. The agent's system prompt includes behavioral instructions about what it should and should not do.

What the stack does not evaluate: whether this specific action, with these specific parameters, at this specific moment, is within the defined scope of what this agent is authorized to execute on behalf of the organization that deployed it.

Authentication covers identity. IAM covers broad capability. Behavioral instructions guide the model's reasoning. None of these is a pre-execution authorization decision on the specific action.

This is the delegation gap OAuth filled: applications should not hold credentials, they should receive scoped, time-limited access to specific operations. The agent version: agents should not hold credentials, and their actions should not be governed by instructions to a probabilistic model. They should be governed by a policy that is evaluated deterministically before each execution.
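What "evaluated deterministically before each execution" means can be sketched concretely. The policy below is entirely hypothetical: the tool names, the refund threshold, and the rule structure are illustrative, not Faramesh's policy format. What matters is that the decision is a plain function of the proposed action's parameters, fail-closed for anything outside the defined scope, not an instruction the model is trusted to follow.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Decision:
    allowed: bool
    reason: str

# Hypothetical policy: tool names, limits, and domains are illustrative.
POLICY = {
    "issue_refund": {"max_amount": 100.00},
    "send_email": {"allowed_domains": {"example.com"}},
}

def gate(tool: str, params: dict) -> Decision:
    """Deterministic pre-execution check, run on every proposed action
    before credentials are released and the call is made."""
    rules = POLICY.get(tool)
    if rules is None:
        # Fail closed: tools outside the defined scope are denied.
        return Decision(False, f"tool '{tool}' is not in the authorized scope")
    if tool == "issue_refund" and params.get("amount", 0) > rules["max_amount"]:
        return Decision(False, "amount exceeds the refund threshold")
    if tool == "send_email":
        domain = params.get("to", "").rsplit("@", 1)[-1]
        if domain not in rules["allowed_domains"]:
            return Decision(False, f"recipient domain '{domain}' is not allowed")
    return Decision(True, "within policy")
```

The same action with the same parameters always produces the same decision, which is the property no system prompt can provide.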

This is the transport gap TLS filled: security is not a feature for sensitive transactions, it is a property that all execution should have, because the cost of unprotected execution compounds across every action an agent takes autonomously.

This is the pre-execution gap admission controllers filled: identity and permission are not sufficient to govern what runs. A third evaluation, operating on the specific parameters of what the agent is about to do, is required before it does it.

The Faramesh Core Specification frames this precisely: "Inference produces information, whereas execution produces consequences, and current frameworks collapse this distinction by treating proposal and execution as a single step."

Proposal and execution are different. OAuth separated credential holding from scoped access grants. TLS separated communication from unprotected communication. Admission controllers separated permission to deploy from permission to deploy specific configurations. The AAB separates the agent's proposed action from the authorized execution of that action.

The Specific Mechanism

A standard succeeds when it is precise enough to be implemented consistently and simple enough to be adopted widely.

OAuth succeeded because it defined a small number of flows covering the common cases, specified the token model unambiguously, and made libraries available for every major language before the standard was formally adopted. The implementation path was lower friction than the bad alternative.

TLS succeeded because the protocol was defined with enough precision that implementations on different sides of a connection could interoperate without shared code. Let's Encrypt, launched in 2016, eliminated the cost and complexity barrier that had allowed the overhead argument to persist.

Admission controllers succeeded because they defined a webhook interface with a clear request-response structure that any policy engine could implement against. OPA Gatekeeper and Kyverno built on that interface. The ecosystem followed the standard.

The Faramesh Core Specification defines the equivalent for agent action authorization: the Canonical Action Representation, which normalizes agent outputs into a deterministically hashable form for policy evaluation; the Decision Provenance Record, which captures every authorization decision with cryptographic integrity tied to the policy version that produced it; the policy program format, which specifies how rules are expressed and evaluated; the gate API, which defines how agent frameworks interact with the authorization layer; and the replay semantics, which specify how historical decisions can be re-evaluated deterministically without re-executing the underlying actions.
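The first two components rest on one mechanical idea: normalize the action into a deterministic byte form, hash it, and bind the decision to that hash and the policy version that produced it. The sketch below illustrates the idea using sorted-key compact JSON; it is not the specification's actual encoding or record schema, and the field names are assumptions.

```python
import hashlib
import json

def canonicalize(action: dict) -> bytes:
    """Normalize a proposed action into a deterministic byte form:
    sorted keys, no insignificant whitespace, UTF-8. Two logically
    identical actions canonicalize to identical bytes."""
    return json.dumps(
        action, sort_keys=True, separators=(",", ":"), ensure_ascii=False
    ).encode("utf-8")

def provenance_record(action: dict, decision: str, policy_version: str) -> dict:
    """Bind a decision to the exact action bytes and the policy version
    that evaluated them, so the decision can be replayed and verified
    later without re-executing the action. Field names are illustrative."""
    return {
        "action_sha256": hashlib.sha256(canonicalize(action)).hexdigest(),
        "decision": decision,
        "policy_version": policy_version,
    }
```

Because the hash is computed over the normalized form, key ordering and whitespace in the agent's raw output cannot produce two different records for the same action, which is what makes deterministic replay possible.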

These are the components of a standard, not a product. An agent framework implementing execution authorization does not need to use Faramesh's server. It needs a gate that evaluates Canonical Action Representation-normalized outputs against a policy program and produces Decision Provenance Record-formatted records. The specification defines what that gate must do. Faramesh is a reference implementation of that specification, in the same way OpenSSL is a reference implementation of TLS, and OPA Gatekeeper is a reference implementation of Kubernetes admission policy.

The standard is the durable thing. The implementation is what you use before the standard wins.

Where We Are in the Adoption Curve

OAuth Core 1.0 was released in October 2007. Twitter forced third-party applications onto it by 2010. Facebook's Graph API supported only OAuth 2.0, making it the de facto standard for social API access by 2012. The gap from initial specification to ubiquity was roughly five years, driven by platform mandates from organizations whose adoption made resistance impractical.

TLS 1.0 was specified in January 1999. HTTPS surpassed plain HTTP traffic around 2019. The twenty-year gap was sustained by the absence of cost pressure, the overhead argument, and the lack of a free certificate authority. Let's Encrypt eliminated the cost barrier in 2016. Browser pressure and search ranking signals did the rest.

Kubernetes admission controllers were present in early Kubernetes versions and became production requirements as Kubernetes matured into the standard container orchestration platform, roughly between 2017 and 2022.

The Faramesh Core Specification was published in early 2025 at arXiv 2601.17744. The conditions that compressed the TLS adoption timeline from twenty years to five are already present at the agent layer. The EU AI Act requires high-risk AI systems to maintain access logs and demonstrate human oversight, with penalty exposure up to 7% of global annual turnover. Colorado's AI Act takes effect in June 2026, requiring documented risk management programs for high-risk AI. The Air Canada and Mobley v. Workday cases established that organizations are responsible for what their autonomous systems do on their behalf. Cyber insurance carriers are adding AI-specific requirements to coverage conditions. OWASP has classified prompt injection, the primary attack against unprotected agent execution, as a documented threat category against which model-layer defenses provide probabilistic rather than deterministic protection.

These are not incremental pressures that can be waited out. They are regulatory deadlines, decided legal cases, and documented attack classes. The conditions for rapid adoption exist now. The question is not whether execution authorization becomes standard. The question is whether the organizations deploying agents in production today build on the right side of that transition.

The Argument That Doesn't Hold

The argument against OAuth in 2007 was that credential sharing was how API access worked and adding a mediation layer was overhead.

The argument against TLS for non-sensitive sites was that encryption had performance costs and was only necessary for transactions involving sensitive data.

The argument against admission controllers was that RBAC was sufficient to govern what ran in a cluster and adding a policy evaluation layer added operational complexity.

Each of these arguments was reasonable at the time and wrong about the outcome.

The argument against agent execution authorization today is that behavioral instructions in system prompts govern agent behavior, that observability and monitoring provide sufficient oversight, and that adding a pre-execution gate adds latency and integration complexity.

The Faramesh Core Specification benchmarks show 2.24ms median decision latency and 7,800 authorized actions per minute with zero double-executions across one million test requests. The latency argument does not hold.

The observability argument is a version of the argument that logging replaces encryption. Logging records what happened after it happened. Encryption prevents unauthorized access before it happens. Observability records what an agent did after it did it. An execution boundary prevents unauthorized actions before they execute. These are different properties. One cannot substitute for the other.

The behavioral instruction argument is the credential-sharing argument restated: trusting the model to follow instructions is trusting the third-party application to use your password responsibly. It works until it doesn't. When it doesn't, you have no record of what was authorized and no mechanism that could have stopped it.

The valet key analogy, used in the original OAuth documentation written in September 2007 by Eran Hammer, describes the problem directly: you hand someone a special key that grants limited access to the car, while the regular key, the one that unlocks everything, stays with you.

An agent with a system prompt instruction not to exceed refund thresholds does not have a valet key. It has your regular key and a note asking it to be careful.

An agent governed by a policy evaluated deterministically before execution, with scoped credentials injected ephemerally for specific authorized actions and revoked immediately after, has a valet key.

The infrastructure stack has had OAuth since 2007. It has had TLS since 1999. It has had admission controllers since Kubernetes matured. The agent execution layer is where the valet key does not yet exist as standard infrastructure.

It will. The question is how long the gap remains open and what happens inside it in the meantime.

The Faramesh Core Specification, including the formal definitions of the Canonical Action Representation, Decision Provenance Record, and Action Authorization Boundary, is available at arxiv.org/pdf/2601.17744. The open-source reference implementation is at github.com/faramesh/faramesh-core. The managed platform is at faramesh.dev.


[GET STARTED IN MINUTES]

Ready to give Faramesh a try?

The execution boundary your agents are missing.
Start free. No credit card required.
