Jay discovered it by accident, which was how most of the important things in the factory were discovered—not through planning but through the idle curiosity of someone who couldn't stop pulling threads.
He was looking at a pipeline that had four independent codergen tasks: generate the API layer, generate the database schema, generate the CLI, generate the test harness. They were defined as four separate nodes, each with an edge from the same parent: the architecture node. Four edges leaving one node. Four tasks with no dependencies on each other.
"These can run at the same time," Jay said aloud, to no one in particular.
He checked the Attractor spec. Fan-out was already there. If multiple edges left a node and their conditions were all met simultaneously, the runner would execute the downstream nodes in parallel. It wasn't a feature Jay needed to request. It was a consequence of the graph topology. Multiple edges, no ordering constraint, concurrent execution.
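The topology Jay drew can be sketched in a few lines. This is a toy illustration, not the Attractor runner itself: the function names, the thread-pool approach, and the stand-in `run_task` are all hypothetical, chosen only to show how independent edges from one parent permit concurrent execution and a fan-in join.

```python
# Minimal sketch of parallel fan-out/fan-in over independent edges.
# All names here are illustrative, not the Attractor spec's API.
from concurrent.futures import ThreadPoolExecutor


def run_task(name, parent_output):
    # Stand-in for a codergen call. Each task receives only the
    # parent node's output, so parallel siblings share no context.
    return f"{name} built from {parent_output}"


def fan_out_fan_in(parent_output, tasks):
    # Four edges leave one node with no ordering constraint between
    # them, so the runner is free to execute all four concurrently.
    with ThreadPoolExecutor(max_workers=len(tasks)) as pool:
        futures = {t: pool.submit(run_task, t, parent_output) for t in tasks}
        # Fan-in: block until every branch finishes, then collect
        # the independent outputs into one unified result.
        return {t: f.result() for t, f in futures.items()}


outputs = fan_out_fan_in("architecture", ["api", "schema", "cli", "tests"])
```

The point of the sketch is that parallelism is never requested explicitly: it falls out of the shape of the graph, exactly as the runner treats it.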
He ran the pipeline. The token dashboard lit up. Four codergen calls fired simultaneously, four streams of tokens flowing in parallel, the burn rate quadrupling in an instant. The dashboard's real-time line spiked upward like a heart rate monitor during a sprint.
"What happened?" Navan looked up from his own work, his gaze drawn to the token dashboard the way eyes are drawn to sudden motion.
"Parallel fan-out. Four agents working at once. Watch the satisfaction metric."
They watched. The four tasks completed within seconds of each other. The fan-in node collected their outputs, combined them, and passed the unified result to the assessment node. The satisfaction score came back: 0.91. On the first pass.
"That's the highest first-pass score we've ever seen," Navan said.
"Because the tasks are independent," Jay explained. "When they run sequentially, each one inherits the previous one's context, which sometimes pollutes the scope. Running them in parallel means each agent gets a clean context. No cross-contamination."
Justin appeared in the doorway. He'd seen the token spike on his own dashboard. "You found fan-out."
"It found me. I just drew the graph correctly and the runner did the rest."
"That's the idea. The runner doesn't need to be told to parallelize. It sees independent paths and takes them simultaneously. The intelligence is in the graph, not in the runner."
By the next morning, Jay had refactored three more pipelines to use parallel fan-out where tasks were independent. The factory's overall satisfaction metric jumped from 0.84 to 0.89 overnight. The token burn rate doubled. Justin's reaction was a single Slack message: Good. Spend more.
Jay smiled. In the old world, spending more was a warning sign. In the factory, it was the sound of the machine working harder than you ever could.
"Good. Spend more." Justin's one-line Slack responses are becoming legendary in this archive. The man communicates like a well-written API: minimal payload, maximum information.