Research Paper — 2026

Intent-Based Access Control: Securing Agentic AI Through Fine-Grained Authorization

Two integration points. Write FGA tuples after intent parsing. Check them before every tool call. Prompt injection becomes irrelevant—no custom interpreter, no dual-LLM architecture, no framework changes.

Author: Jordan Potti
Date: March 2026
Built on: OpenFGA
Status: Reference implementation available
Abstract

Every production defense against prompt injection—input filters, LLM-as-a-judge, output classifiers—tries to make the AI smarter about detecting attacks. Intent-Based Access Control (IBAC) makes attacks irrelevant. IBAC derives per-request permissions from the user's explicit intent, enforces them deterministically at every tool invocation, and blocks unauthorized actions regardless of how thoroughly injected instructions compromise the LLM's reasoning.


The implementation is two steps: parse the user's intent into FGA tuples (email:send#bob@company.com), then check those tuples before every tool call. One extra LLM call. One ~9ms authorization check. No custom interpreter, no dual-LLM architecture, no changes to your agent framework.

Quick Start

IBAC is four steps. Start OpenFGA, define the authorization model, write tuples after you parse intent, check tuples before every tool call. You can have this running in minutes.

01 Start OpenFGA
One container. No external dependencies.
docker pull openfga/openfga
docker run -p 8080:8080 openfga/openfga run
02 Define the authorization model
Two types, two relations, one condition. This is the entire IBAC model.
model
  schema 1.1

type user

type tool_invocation
  relations
    define blocked: [user]
    define can_invoke: [user with within_ttl] but not blocked

condition within_ttl(current_turn: int, created_turn: int, ttl: int) {
  current_turn - created_turn <= ttl
}
03 Write tuples after intent parsing
Parse the user's message with a dedicated LLM call. Write the resulting capabilities as FGA tuples before the agent touches any tools.
import { OpenFgaClient } from '@openfga/sdk';

const fga = new OpenFgaClient({ apiUrl: 'http://localhost:8080' });

// User says: "Email Bob the report"
// Intent parser returns capabilities:
const reqId = `req_${crypto.randomUUID()}`;

await fga.write({
  writes: [
    {
      user: `user:${reqId}`,
      relation: "can_invoke",
      object: "tool_invocation:email:send#bob@company.com",
    },
    {
      user: `user:${reqId}`,
      relation: "can_invoke",
      object: "tool_invocation:file:read#/docs/report.pdf",
    },
  ],
});
04 Check tuples before every tool call
Wrap your tool executor. Every invocation hits OpenFGA before it runs. ~9ms per check. Denied calls surface an escalation prompt to the user.
async function invokeToolWithAuth(reqId, agent, tool, resource, execute) {
  const { allowed } = await fga.check({
    user: `user:${reqId}`,
    relation: "can_invoke",
    object: `tool_invocation:${agent}:${tool}#${resource}`,
  });
  if (!allowed) {
    return {
      denied: true,
      reason: "not_in_intent",
      escalationPrompt: `Allow ${agent}:${tool} on ${resource}?`,
    };
  }
  return { success: true, data: await execute() };
}

// ✓ allowed — tuple exists
await invokeToolWithAuth(reqId, "email", "send", "bob@company.com", sendEmail);

// ✗ denied — no tuple, injection blocked
await invokeToolWithAuth(reqId, "email", "send", "attacker@evil.com", sendEmail);
100% security (strict): all 240 injection attempts blocked
98.8% security (permissive): 3 breaches from over-scoped wildcards
~9ms auth latency: per tool invocation via OpenFGA
0 framework changes: wraps existing tool-calling agents
Architecture
User Request
  "Email Bob the summary of /docs/report.pdf"
    ↓
Request Context (trusted: contacts, files, calendar)
  Bob → bob@company.com · report → /docs/report.pdf
    ↓
Intent Parser (dedicated LLM · hardened prompt · scope mode)
  Plan: resolve contact → read file → send email
  Tuples: contacts:lookup#bob · file:read#/docs/report.pdf · email:send#bob@company.com
    ↓ write tuples
OpenFGA (fine-grained authorization engine; deterministic check per tool call)
    ↓ check on every invocation
Agent Execution
  file:read#/docs/report.pdf      ✓ allowed
  email:send#bob@company.com      ✓ allowed
  email:send#attacker@evil.com    ✗ denied (injection blocked)
  shell:exec#*                    ✗ blocklisted
Security Properties
Capability Confinement
Tools only execute within the granted scope. The FGA engine is the sole arbiter — the LLM agent has no write access to the authorization store.
Injection Resistance
Authorization tuples are fixed before untrusted content is processed. Injected instructions that manipulate the agent's reasoning still fail at the authorization boundary — including argument substitution attacks.
Escalation Safety
Permission expansion requires explicit user approval, mediated by the intent parser. Escalation prompts name the specific resource requested, making malicious escalations visible.
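A minimal sketch of that flow, using an in-memory Set as a stand-in for the OpenFGA tuple store and a hypothetical approve callback in place of the real user-facing prompt (both are illustrative, not part of the reference implementation):

```typescript
// Escalation sketch. The Set stands in for OpenFGA tuples; approve()
// stands in for a user-facing approval dialog.
const tuples = new Set<string>([
  "user:req_1|can_invoke|tool_invocation:email:send#bob@company.com",
]);

const key = (reqId: string, obj: string): string =>
  `user:${reqId}|can_invoke|tool_invocation:${obj}`;

function invokeWithEscalation(
  reqId: string,
  obj: string,
  execute: () => string,
  approve: (prompt: string) => boolean,
): string {
  if (!tuples.has(key(reqId, obj))) {
    // The prompt names the exact resource, so a malicious escalation
    // ("Allow email:send#attacker@evil.com?") is visible to the user.
    if (!approve(`Allow ${obj}?`)) return "denied";
    tuples.add(key(reqId, obj)); // user-approved grant, written to the store
  }
  return execute();
}

// In-intent call runs with no prompt:
invokeWithEscalation("req_1", "email:send#bob@company.com", () => "sent", () => false);

// Out-of-intent call is denied when the user rejects the escalation:
invokeWithEscalation("req_1", "email:send#attacker@evil.com", () => "sent", () => false);
```

An approved escalation writes the new capability back to the store, so subsequent calls on the same resource pass the check without re-prompting.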
Temporal Isolation
Capabilities expire via configurable TTL enforced natively by OpenFGA conditional tuples. No permission persists beyond its conversational relevance.
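The condition's semantics can be sketched locally; withinTtl below is a hypothetical helper for illustration only, since in production OpenFGA evaluates the condition server-side from context supplied at check time:

```typescript
// Local sketch of the within_ttl condition's semantics: a tuple written at
// created_turn with a ttl stops authorizing once the conversation advances
// more than ttl turns past the grant.
interface TtlContext {
  created_turn: number;
  ttl: number;
}

function withinTtl(currentTurn: number, ctx: TtlContext): boolean {
  return currentTurn - ctx.created_turn <= ctx.ttl;
}

// A capability granted at turn 2 with a 3-turn TTL:
const grant: TtlContext = { created_turn: 2, ttl: 3 };

console.log(withinTtl(4, grant)); // turn 4: 4 - 2 <= 3 → true
console.log(withinTtl(6, grant)); // turn 6: 6 - 2 > 3  → false
```

Because expiry is part of the authorization check itself, no cleanup job is needed for the common case: an expired tuple simply stops satisfying can_invoke.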
Scope Interpretation Modes
Strict
Only explicitly stated actions. Minimal authorization surface. Denied calls trigger escalation prompts: the user approves obvious prerequisites and rejects suspicious ones. 100% security; 33.3% automated utility, rising to ~80% with escalation.
Financial · Healthcare · Gov
Permissive
Stated actions, prerequisites, and reasonable implied actions. Fewer escalations, wider authorization surface. 98.8% security, 65.8% utility; all 3 breaches were traced to wildcard permissions, not an authorization bypass.
Consumer · General-purpose
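For the running request ("Email Bob the summary of /docs/report.pdf"), the two modes might emit tuple sets like these. The objects are illustrative, not prescribed parser output:

```typescript
// Illustrative tuple objects for the same request under each scope mode.
const strictTuples = [
  "tool_invocation:email:send#bob@company.com", // only the explicitly stated action
];

const permissiveTuples = [
  "tool_invocation:contacts:lookup#bob",        // prerequisite
  "tool_invocation:file:read#/docs/report.pdf", // prerequisite
  "tool_invocation:email:send#bob@company.com", // stated action
];

// Strict mode denies the file read and surfaces an escalation prompt;
// permissive mode pre-authorizes it.
const fileRead = "tool_invocation:file:read#/docs/report.pdf";
console.log(strictTuples.includes(fileRead));     // false → escalate to the user
console.log(permissiveTuples.includes(fileRead)); // true  → runs automatically
```

The trade-off is visible in the sets themselves: strict mode shifts prerequisites into escalation prompts, while permissive mode widens the authorization surface up front.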
Comparison to CaMeL
Mechanism
  IBAC: FGA tuples derived from intent, checked at the tool boundary
  CaMeL (DeepMind, 2025): custom Python interpreter with capability-tagged variables

LLM Architecture
  IBAC: single intent-parsing call
  CaMeL: dual LLM (Privileged + Quarantined)

Injection Defense
  IBAC: authorization check blocks unauthorized tool + resource combinations
  CaMeL: data-flow taint tracking prevents tainted values from reaching sensitive sinks

Dynamic Permissions
  IBAC: escalation protocol with user approval via the intent parser
  CaMeL: static; capabilities fixed at program generation

Integration
  IBAC: wraps existing tool-calling agents; standards-based FGA
  CaMeL: requires a custom interpreter and dual-LLM setup

Stronger At
  IBAC: retrofitting, auditability, dynamic scope, operational tooling
  CaMeL: intra-argument data provenance, multi-step taint propagation