The email arrived at 3:47 AM Pacific time, which meant it was 12:47 PM in Berlin. The subject line was "We built one" and it was addressed to the general contact address listed on factory.strongdm.ai.
The body was three paragraphs. The first described a four-person startup called Lautlos GmbH that built compliance tooling for European financial institutions. The second described how they had read the NLSpec documentation, cloned the Attractor repository, installed Leash and Agate, and in fourteen days had a functioning software factory producing validated code against their own digital twins of three banking APIs. The third paragraph asked, simply, if they were doing it right.
Justin saw the email almost as soon as it arrived, because he was already awake, reading a paper about attractor basins in coupled oscillator networks. He read the email once. He read it again. Then he replied.
The timestamp on his reply was 3:51 AM. Four minutes after receipt, if you were being generous. More like three minutes and change.
His reply was six paragraphs. The first congratulated them. The second asked three specific questions about their twin fidelity—how they validated behavioral equivalence, what their scenario coverage looked like, whether they'd encountered the cold-start problem with satisfaction metrics on fresh twins. The third paragraph suggested they look at branch-from-any-turn in CXDB for their agent context management instead of the flat-file approach they'd described. The fourth pointed them to a specific section of the coding-agent-loop spec that addressed the exact convergence issue they'd mentioned in passing. The fifth recommended they increase their token spend. The sixth said he was glad the documentation had been sufficient.
In Berlin, Maren Vogt read the reply and showed it to her team. "He answered in four minutes," she said. "At four in the morning."
"Maybe it's automated," said Erik, their lead architect.
"It's not automated. He asked about our twin fidelity testing. He noticed we mentioned the convergence issue. He pointed us to a specific section of a specific document." Maren reread the email. "He's been waiting for someone to do this."
They exchanged eleven more emails over the next two weeks. Justin's replies were always fast, always specific, always structured as clear recommendations rather than mandates. When Lautlos hit their first satisfaction plateau—stuck at 0.73 across their primary scenario suite—Justin suggested they weren't testing enough failure modes. He was right. When they added adversarial scenarios, satisfaction dropped to 0.61 and then climbed to 0.84 within a week.
"The drop is the signal," Justin wrote. "If your satisfaction doesn't drop when you add harder scenarios, your scenarios aren't hard enough."
By the end of November, Lautlos had deployed factory-generated code to their staging environment. Three of their banking-API clients noticed the improvement in response time. None of them knew that no human had written the code.
Maren sent one more email. "Thank you for making this open source."
Justin replied in two minutes. "That was the point."