The agent wanted to deploy to production.
It was agent agate-impl-12, a coding agent working on the Agate convergence loop. It had completed its sprint. The scenarios passed. The satisfaction metric for the target GOAL.md objective sat at 0.97. The code was clean, the tests were green, and the agent had composed a deployment request and sent it to Leash.
Leash received the request. The Cedar policy engine spun up.
Forty-seven conditions.
Navan had written them over three weeks, refining each one against a battery of deployment scenarios. They were not arbitrary. Each condition represented a lesson learned—from the factory's own history, from production incidents at StrongDM, from the collected wisdom of every deployment gone wrong that Navan had ever read about or lived through.
Condition one: the requesting agent must have a valid StrongDM ID with a DPoP token issued within the last 3600 seconds. Condition two: the agent must belong to the authorized deployer group for the target repository. Condition three: the target branch must be the designated release branch. Conditions four through twelve: the scenario suite for the target project must have completed within the last hour, and every scenario must have a satisfaction score above the deployment threshold, which varied by project but was never below 0.93.
Conditions thirteen through twenty-seven covered the code itself. No secrets in the diff. No changes to Cedar policy files—only humans changed policy. No modifications to the deployment pipeline configuration. No new external dependencies without a prior approval turn in CXDB. No changes to files outside the agent's authorized scope.
Conditions twenty-eight through forty: the operational checks. Target environment health. Rollback plan existence. Canary configuration. Monitoring hooks. Alert thresholds. Circuit breaker settings. Every piece of infrastructure that would catch the deployment if it went sideways, verified before the deployment went forward.
Conditions forty-one through forty-seven: the human conditions. Had a human reviewed the GOAL.md that generated this work within the last seven days? Had a human approved the scenario thresholds? Was the deployment window within the approved hours? Were all three team members reachable by page?
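The shape of the gate is easy to see in miniature: a default-deny evaluation where every condition must hold for an ALLOW. A minimal Python sketch follows; the condition names, request fields, and the `policies/` path prefix are illustrative stand-ins, not the factory's actual policy.

```python
# Hypothetical sketch of a default-deny deployment gate in the spirit of
# the Cedar evaluation described above. All names and thresholds here are
# illustrative assumptions, not the real forty-seven conditions.
import time

DEPLOY_THRESHOLD = 0.93   # floor for scenario satisfaction (conditions 4-12)
TOKEN_MAX_AGE_S = 3600    # DPoP token freshness window (condition 1)

def token_fresh(req):
    # Condition one: token issued within the last 3600 seconds.
    return time.time() - req["token_issued_at"] <= TOKEN_MAX_AGE_S

def in_deployer_group(req):
    # Condition two: agent belongs to the authorized deployer group.
    return req["agent_group"] == "authorized-deployers"

def scenarios_pass(req):
    # Conditions four through twelve (simplified): every scenario score
    # must clear the deployment threshold.
    return all(s >= DEPLOY_THRESHOLD for s in req["scenario_scores"])

def no_policy_edits(req):
    # From conditions 13-27: agents may not touch policy files.
    # "policies/" is an assumed path prefix for illustration.
    return not any(p.startswith("policies/") for p in req["changed_files"])

CONDITIONS = [token_fresh, in_deployer_group, scenarios_pass, no_policy_edits]

def evaluate(req):
    # Default deny: ALLOW only if every condition returns True.
    return "ALLOW" if all(cond(req) for cond in CONDITIONS) else "DENY"
```

The design choice that matters is the `all(...)`: there is no scoring or weighing, and a single failed condition denies the whole request, which is why a stale scenario run or an off-hours window is enough to stop a deployment cold.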
The Cedar engine evaluated all forty-seven conditions in 340 microseconds—the Rust engine, the one Jay had semported from Go. Every condition returned true. The policy decision was ALLOW.
The deployment proceeded. Agent agate-impl-12 pushed to production. The canary received traffic. The monitoring hooks fired. The metrics held. The circuit breakers stayed closed.
Jay watched the deployment from his terminal. "Forty-seven conditions," he said. "And it passed all of them."
"That's not the impressive part," Navan said. "The impressive part is the ones that didn't pass. Last week, attractor-codegen-07 tried to deploy with a stale scenario run. Condition nine caught it. Two weeks before that, an agent tried to deploy outside the approved window. Condition forty-three."
"The policy is the guardrail," Justin said. "The agents push against it constantly. That's how you know it's working."
Navan checked the Cedar policy log. Forty-seven evaluations. Forty-seven passes. One deployment. Zero humans in the critical path.
The production environment hummed. The code was live. The policy held.
The detail about "no changes to Cedar policy files—only humans changed policy" was the key insight. The agents could do anything within the boundaries. They just couldn't move the boundaries. That was the whole trust model in one rule.