The prompt appeared in the terminal the first time Navan launched a Leash session that needed access to a host credential.
Leash wants to forward ANTHROPIC_API_KEY into the container.
[A]llow once | [P]roject always | [G]lobal always | [D]eny
Four options. Each one a different radius of trust.
Navan stared at the prompt. He'd seen permission dialogs before—every operating system had them, every browser had them, every mobile app had them. Most of them offered two choices: yes or no. Allow or deny. Binary. Crude. The kind of decision architecture that trained users to click "allow" without thinking, because the alternative was "deny" and then nothing worked.
This was different. This had granularity.
"Allow once" meant the credential would be forwarded for this session only. When the container stopped, the permission evaporated. Next time, Leash would ask again.
"Project always" meant the credential would be forwarded automatically whenever Navan launched a Leash session in this project directory. Different project, different decision. The trust was scoped to the work.
"Global always" meant the credential would be forwarded in every Leash session, everywhere, until Navan explicitly revoked it. Maximum convenience. Maximum exposure.
"Deny" meant no. Not now, not ever, not until you change your mind.
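The four options above amount to a small lookup order: global grant, then project-scoped decision, then session-scoped grant, then re-prompt. A minimal sketch of that resolution logic, assuming a hypothetical data model (the names `Scope`, `resolve`, and the grant structures are illustrative, not Leash's actual internals):

```python
from enum import Enum

class Scope(Enum):
    """Possible answers to the forwarding prompt (hypothetical model)."""
    ONCE = "allow once"        # session-scoped: forgotten when the container stops
    PROJECT = "project always" # scoped to this project directory
    GLOBAL = "global always"   # every session, everywhere, until revoked
    DENY = "deny"              # persistent no, until changed

def resolve(credential, project, project_grants, global_grants, session_grants):
    """Return True/False if a stored decision applies, or None to re-prompt."""
    if credential in global_grants:
        return True
    decision = project_grants.get((project, credential))
    if decision == Scope.PROJECT:
        return True
    if decision == Scope.DENY:
        return False
    if (project, credential) in session_grants:
        return True  # "allow once": lives only as long as this session
    return None      # no stored decision: ask the user
```

Note that only `PROJECT`, `DENY`, and `GLOBAL` decisions survive in persistent storage in this sketch; an `ONCE` grant lives in the session set and evaporates with it, which is exactly the behavior the prose describes.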
Navan pressed P. Project always. This was the Okta twin project. He'd be launching Claude sessions here daily. He didn't want to be prompted every time. But he also didn't want his Anthropic key forwarded into random containers in other projects without his explicit consent.
The decision was recorded in ~/.config/leash/config.toml. Navan checked the file afterward. There it was: a clean TOML block mapping the project path to the credential permission. Human-readable. Human-editable. If he ever wanted to revoke the permission, he could delete the line. No settings menu to navigate, no obscure checkbox to uncheck. Just a text file.
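The source doesn't quote the file's contents, but a "clean TOML block mapping the project path to the credential permission" might look something like this (table and key names are hypothetical, not Leash's actual schema):

```toml
# ~/.config/leash/config.toml -- hypothetical layout
[credentials."/home/navan/okta-twin"]
ANTHROPIC_API_KEY = "project-always"
```

Deleting the key line (or the whole table) would be the revocation path the prose describes: no settings menu, just an edit to a text file.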
Jay, working across the room on a different project, got the same prompt. He pressed A. Allow once. He wanted to think about it. He was working on something experimental and wasn't sure he'd be coming back to this project tomorrow.
"I like the granularity," Navan said at standup the next morning.
Jay agreed. "It respects the fact that trust isn't binary. Sometimes you trust a specific context. Sometimes you trust a general pattern. Sometimes you just trust this one moment."
Justin listened, nodded. "That's the design principle. Every permission decision should match the scope of the trust it represents. Anything coarser than that is a security liability. Anything finer than that is a usability tax."
Navan wrote it in his notebook. He underlined "scope of the trust" twice.
The next time Leash asked, he already knew his answer. The prompt appeared and vanished in under a second. The system had learned his preference. The granularity had done its job.