Jay built the dashboard on a Saturday. He'd been thinking about it all week, ever since the 3 AM anomaly first showed up in the data, and he couldn't let it go. He was an SRE at heart, and SREs don't leave anomalies uninvestigated.
The dashboard was simple. Horizontal axis: hour of day, Pacific time. Vertical axis: three metrics stacked—scenarios completed, average satisfaction score, convergence rate. Twelve weeks of data, smoothed. The pattern was undeniable.
Between 3 and 5 AM, everything peaked. Scenario throughput up 40 percent. Satisfaction scores up 1.3 points. Convergence rate—the fraction of agent runs that reached a satisfactory state within the allocated time—up 12 percent. The factory's best work happened when no one was watching.
Jay ran the statistics properly. He was careful about it. Confidence intervals. Correction for multiple comparisons. He even controlled for scenario difficulty, because maybe easier scenarios were getting scheduled at night by coincidence. They weren't. The effect held across all difficulty tiers.
He wrote it up. Clean analysis. Clear methodology. Reproducible results. The kind of finding you put in a blog post or a conference paper. He sent it to Justin with the subject line: Publishable finding re: agent temporal performance variation.
Justin's response came in twelve minutes: Come to the office. Don't share this with anyone else.
They met on Monday morning. Justin had printed Jay's analysis, which was unusual—Justin rarely printed things. The pages were annotated in pencil. Justin had read it carefully.
"This is good work," Justin said. "This is statistically sound. This is also something we are never going to publish."
Jay had expected resistance. He had not expected a flat no.
"If we publish, every team running agentic workloads shifts to the same window. The LLM providers see the load spike. They rebalance. The advantage disappears. We've published away our edge."
"Or," Jay countered, "publishing it advances the field. Other people build on the finding. We get credit for the discovery."
"We get credit and lose the advantage. The factory doesn't need credit. The factory needs output."
Jay wanted to argue. The scientist in him, the part that had spent years on Hacker News discussing open research and the value of shared knowledge, wanted to push back. But the engineer in him understood. An advantage you publish is an advantage you no longer have.
"So what do we do with it?" Jay asked.
"We use it. Schedule the hardest scenarios for the dark hours. The complex multi-service orchestrations. The edge cases that push satisfaction the lowest. Let the agents do their best work when they do their best work."
Jay restructured the scheduling that afternoon. The hardest scenarios shifted to the 3–5 AM window. Nobody outside the team would ever know why.
The factory ran twenty-four hours a day, seven days a week. But it had a sweet spot. A quiet interval where the code was cleanest, the convergence was fastest, and the satisfaction climbed highest. The dark hours. The best hours.
Jay unpublished his analysis. It lived in a private doc. The factory kept its edge.
The tension between open science and competitive advantage is real. Jay the researcher wants to share; Justin the operator wants to win. Both are right. Neither can have what he wants.