The question came from a venture capitalist at a dinner that Justin had been persuaded to attend. The VC leaned forward, wine glass in hand, and asked the question that every VC eventually asks: "Can it scale?"
Specifically: "Can the factory scale to a hundred engineers?"
Justin set down his fork. He had a way of setting down utensils that telegraphed he was about to say something the listener might not expect, and the people at the table who knew him recognized the gesture and leaned in.
"The question is wrong," Justin said.
The VC blinked. "Wrong how?"
"You're asking if we can scale the number of humans. That's the Software 1.0 question. More humans, more output. Linear scaling. Maybe sublinear, because of communication overhead—Brooks's Law. You add people to a late project and it gets later. Everybody knows this."
"So you're saying it can't scale?"
"I'm saying the right question is: can the factory scale to a hundred agents with three engineers?"
The table went quiet. Not uncomfortable quiet. Thinking quiet.
"Right now we have three people and dozens of agents," Justin continued. "The agents write code. The agents review code. The agents test code. The agents deploy code. The humans write scenarios and define goals and set policy boundaries. The humans describe the world. The agents build it."
"So you add more agents, not more humans."
"You add more agents and more scenarios. The scenarios are the constraint. Each agent needs to know what good looks like. That's what the satisfaction metric measures. You can spin up a hundred agents tomorrow—the cloud doesn't care. But you need scenarios for them to work against, and scenarios require human judgment. Not human code. Human judgment."
Jay, who was also at the dinner and had been silently eating bread, said: "The bottleneck isn't engineering capacity. The bottleneck is our ability to describe what we want."
The VC sat with this. You could see the mental model shifting behind his eyes—the old model, where scaling meant headcount, giving way to a new model, where scaling meant the number of well-described intentions running in parallel.
"What's the token cost at a hundred agents?" he asked.
"Significant," Justin said. "But tokens are cheaper than engineers. And they're getting cheaper every quarter while engineers are getting more expensive. The economics only go in one direction."
The VC picked up his wine glass. "Dark factory," he said, almost to himself.
"Level 5 manufacturing," Justin confirmed. "The robots work in unlit facilities. The lights are for the humans, and if the humans aren't on the factory floor, you don't need the lights."
The dinner continued. The VC asked four more questions, each one better than the last. Justin answered each one. Jay ate bread.
On the drive home, Justin dictated a note to his phone. The note said: The question is always wrong. The question is always about humans. Reframe. The unit of scale is the agent. The unit of direction is the scenario. The unit of judgment is the human.