Navan drew it in his notebook first. Pencil on graph paper, because some ideas needed to exist on paper before they could exist anywhere else. Peaks and valleys. Ridges and basins. A topographical map of something that wasn't terrain.
"Satisfaction is a landscape," he told Jay, turning the notebook so they could both see it. "Every possible implementation of a scenario maps to a point on this surface. The elevation is the satisfaction score. The agents are hill-climbing."
Jay studied the drawing. Navan had a clean, precise style. The peaks were labeled with satisfaction percentages. The valleys were labeled with failure modes.
"And these?" Jay pointed to a cluster of medium-height peaks in one corner.
"Local maxima. The agent climbs to the nearest peak and stops. Satisfaction is high—ninety-two, ninety-three percent. But there's a higher peak over here." Navan tapped a single tall peak on the other side of the landscape. "Ninety-nine percent. The global maximum. To get there, the agent would have to go down first. Through a valley. Past a worse implementation."
"And agents don't do that naturally."
"Hill-climbing algorithms never do. They follow the gradient upward. They can't see the whole landscape. They see the local slope."
This was the problem the team had been circling for weeks. Most of the time, the agents found excellent solutions. High satisfaction. Clean code. But occasionally an agent would get stuck on a local maximum—a solution that was good but not great, and couldn't be improved by small changes because every small change made it worse.
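The hill-climbing behavior Navan describes fits in a few lines of Python. Everything here is invented for illustration: a toy one-dimensional landscape with a 0.92 local peak and a 0.99 global peak, and a greedy climber that only moves to a strictly better neighbor. An agent started on the wrong slope stops at the lower peak, exactly as in the drawing.

```python
def satisfaction(x):
    """Hypothetical 1-D landscape: a local peak (~0.92) at x=2
    and a higher global peak (~0.99) at x=8."""
    return (0.92 * max(0.0, 1 - abs(x - 2) / 2)
            + 0.99 * max(0.0, 1 - abs(x - 8) / 2))

def hill_climb(x, step=0.1):
    """Greedy ascent: move to the better neighbor, stop when
    neither neighbor strictly improves on the current point."""
    while True:
        nxt = max((x - step, x + step), key=satisfaction)
        if satisfaction(nxt) <= satisfaction(x):
            return x  # a peak, but possibly only a local one
        x = nxt

peak = hill_climb(1.0)  # starts on the lower slope, stops near x=2 (~0.92)
```

A start at `x = 7.0` instead climbs to the global peak near `x = 8`; the algorithm itself cannot tell the two outcomes apart, which is the whole problem.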
Justin's solution was characteristically structural. "Multiple agents," he said when they brought the notebook to him. "Different starting points. If one agent finds a local maximum, another agent starting from a different initial implementation might find the global one. Agate already supports this—parallel sprints with different approaches, then evaluate which one converges to the highest satisfaction."
"Simulated annealing," Navan said. "In metallurgy, you heat the metal so the atoms can escape local energy minima, then cool it slowly so they settle into the global minimum."
"Same principle," Justin agreed. "Randomness lets you escape local optima. Multiple agents with different initializations is our version of raising the temperature."
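Justin's multi-agent scheme is, in optimization terms, random restarts. A minimal self-contained sketch on an invented two-peak landscape (all names, numbers, and the seed are hypothetical, not Agate's actual API): launch several climbers from random starting points and keep whichever converges highest.

```python
import random

def satisfaction(x):
    # Hypothetical landscape: local peak ~0.92 at x=2, global ~0.99 at x=8.
    return (0.92 * max(0.0, 1 - abs(x - 2) / 2)
            + 0.99 * max(0.0, 1 - abs(x - 8) / 2))

def hill_climb(x, step=0.1):
    # Greedy ascent; stops when no neighbor strictly improves.
    while True:
        nxt = max((x - step, x + step), key=satisfaction)
        if satisfaction(nxt) <= satisfaction(x):
            return x
        x = nxt

def random_restarts(n_agents=20, lo=0.0, hi=10.0, seed=42):
    """Justin's scheme: n agents hill-climb from random starting
    implementations; keep the one that converges highest."""
    rng = random.Random(seed)
    peaks = [hill_climb(rng.uniform(lo, hi)) for _ in range(n_agents)]
    return max(peaks, key=satisfaction)

best = random_restarts()  # with enough agents, one starts in the global basin
```

The randomness lives entirely in the starting points here; full simulated annealing would instead let a single climber occasionally accept a downhill move with a probability that shrinks as the "temperature" cools. Both break the tyranny of the local gradient.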
Jay looked at Navan's drawing again. The landscape was beautiful in a mathematical way. All those peaks and valleys, all those possible implementations, and the agents were tiny points moving uphill through the topography, seeking the highest ground.
"What about the valleys?" Jay asked. "The worst implementations. What lives down there?"
Navan smiled. "Race conditions. Off-by-one errors. Unhandled null pointers. The classics. The satisfaction landscape is a map of everything that can go wrong, and the peaks are the places where nothing does."
He tore the page out of his notebook and pinned it to the wall above his desk. It stayed there for months. Someone eventually took a photo of it and it circulated on internal Slack with the caption: the map of everything.
It wasn't the map of everything. But it was close enough to be useful, which is all a map needs to be.