Two integration points: write FGA tuples after intent parsing, and check them before every tool call. Prompt injection becomes irrelevant. No custom interpreter, no dual-LLM architecture, no framework changes.
Every production defense against prompt injection—input filters, LLM-as-a-judge, output classifiers—tries to make the AI smarter about detecting attacks. Intent-Based Access Control (IBAC) makes attacks irrelevant. IBAC derives per-request permissions from the user's explicit intent, enforces them deterministically at every tool invocation, and blocks unauthorized actions regardless of how thoroughly injected instructions compromise the LLM's reasoning.
The implementation is two steps: parse the user's intent into FGA tuples (email:send#bob@company.com), then check those tuples before every tool call. One extra LLM call. One ~9ms authorization check. No custom interpreter, no dual-LLM architecture, no changes to your agent framework.
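The two integration points can be sketched as follows. The in-memory `TupleStore` and the hard-coded `parse_intent` are illustrative stand-ins for the real OpenFGA client and the intent-parsing LLM call, not an actual SDK:

```python
from dataclasses import dataclass

# A relationship tuple is (user, relation, object), as in OpenFGA.
@dataclass(frozen=True)
class RelTuple:
    user: str
    relation: str
    obj: str

class TupleStore:
    """In-memory stand-in for OpenFGA's write/check API."""
    def __init__(self):
        self._tuples = set()

    def write(self, t: RelTuple) -> None:
        self._tuples.add(t)

    def check(self, user: str, relation: str, obj: str) -> bool:
        return RelTuple(user, relation, obj) in self._tuples

def parse_intent(user_id: str, utterance: str) -> list:
    # Stand-in for the intent-parsing LLM call: derive per-request tuples
    # from the user's explicit request (hard-coded here for illustration).
    return [RelTuple(user_id, "send", "email:bob@company.com")]

def guarded_tool_call(store: TupleStore, user: str, relation: str, obj: str, tool):
    # Integration point 2: deterministic check at the tool boundary.
    if not store.check(user, relation, obj):
        raise PermissionError(f"'{relation}' on '{obj}' is outside the parsed intent")
    return tool()

store = TupleStore()
# Integration point 1: write tuples after intent parsing.
for t in parse_intent("user:alice", "email bob the Q3 report"):
    store.write(t)

# In-scope call succeeds.
guarded_tool_call(store, "user:alice", "send", "email:bob@company.com", lambda: "sent")

# An injected instruction to exfiltrate elsewhere fails the check,
# regardless of what the compromised LLM "decided" to do.
try:
    guarded_tool_call(store, "user:alice", "send", "email:attacker@evil.com", lambda: "sent")
except PermissionError:
    pass
```

The check is a deterministic set membership test, so no amount of injected text in the LLM's context can widen the scope it enforces.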
IBAC is four steps: start OpenFGA, define the authorization model, write tuples after parsing intent, and check tuples before every tool call. You can have this running in minutes.
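The second step, the authorization model, might look like this in OpenFGA's modeling DSL; the `email` type and `send` relation are illustrative, not prescribed:

```
model
  schema 1.1

type user

type email
  relations
    define send: [user]
```

Each request then writes tuples like `user:alice` has `send` on `email:bob@company.com`, scoped to that request's parsed intent.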
| | IBAC | CaMeL (DeepMind, 2025) |
|---|---|---|
| Mechanism | FGA tuples derived from intent, checked at tool boundary | Custom Python interpreter with capability-tagged variables |
| LLM Architecture | Single intent-parsing call | Dual LLM (Privileged + Quarantined) |
| Injection Defense | Authorization check blocks unauthorized tool + resource combinations | Data flow taint tracking prevents tainted values from reaching sensitive sinks |
| Dynamic Permissions | Escalation protocol with user approval via intent parser | Static: capabilities fixed at program generation |
| Integration | Wraps existing tool-calling agents; standards-based FGA | Requires custom interpreter and dual-LLM setup |
| Stronger At | Retrofitting, auditability, dynamic scope, operational tooling | Intra-argument data provenance, multi-step taint propagation |
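The "retrofitting" and "dynamic scope" rows can be made concrete with a small sketch: a decorator wraps an existing tool function with the IBAC check, and a failed check triggers the escalation protocol instead of silently widening scope. The `INTENT_TUPLES` set and `approve` callback are hypothetical stand-ins for the OpenFGA store and the user-approval step:

```python
import functools

# Hypothetical per-request tuple set standing in for OpenFGA.
INTENT_TUPLES = {("user:alice", "send", "email:bob@company.com")}

def approve(prompt: str) -> bool:
    # Stand-in for the user-approval step of the escalation protocol;
    # deny by default in this sketch.
    return False

def require_intent(relation: str):
    """Decorator that retrofits an IBAC check onto an existing tool function."""
    def wrap(tool):
        @functools.wraps(tool)
        def guarded(user: str, obj: str, *args, **kwargs):
            if (user, relation, obj) not in INTENT_TUPLES:
                # Escalation: ask the user before widening the intent scope.
                if approve(f"Allow {user} to {relation} {obj}?"):
                    INTENT_TUPLES.add((user, relation, obj))
                else:
                    raise PermissionError(f"{user}: '{relation}' on '{obj}' denied")
            return tool(user, obj, *args, **kwargs)
        return guarded
    return wrap

@require_intent("send")
def send_email(user, obj, body):
    # Existing tool function, unchanged apart from the decorator.
    return f"sent '{body}' to {obj}"

send_email("user:alice", "email:bob@company.com", "Q3 report")
```

Because the decorator sits at the tool boundary, the agent framework itself never changes; this is the retrofit path that a custom-interpreter design like CaMeL cannot offer.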