It started as a debugging tool. Navan needed to test how agents handled real-time collaboration in Google Docs, which meant the Docs twin had to simulate one party typing while another read. The API-level approach was straightforward: batched insert operations, entire paragraphs appearing at once. But that wasn't how humans used Google Docs, and the scenarios needed to model human-like collaboration.
So Navan taught the twin to type.
The typing simulation model was simple in concept. Instead of inserting a complete string at once, the twin inserted characters one at a time with variable delays between them. The delays followed a distribution modeled on observed human typing patterns: an average of 120 milliseconds between characters, with variance. Faster for common letter pairs. Slower at the beginning of words. A brief pause at the end of sentences. Occasional longer pauses that simulated thinking.
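A delay model like the one described can be sketched in a few lines. This is a minimal illustration of the idea, not the twin's actual implementation; every constant here (the 120 ms base is from the story, but the variance, bigram speedup, pause lengths, and probabilities are invented for the example) is an assumption.

```python
import random

# Illustrative constants: only the 120 ms base interval comes from the story;
# the rest are invented to show the shape of the model.
COMMON_BIGRAMS = {"th", "he", "in", "er", "an", "re", "on", "at"}

def keystroke_delay(prev: str, curr: str, rng: random.Random) -> float:
    """Milliseconds of delay before typing `curr`, given the previous char."""
    delay = rng.gauss(120, 25)           # average ~120 ms, with variance
    if (prev + curr).lower() in COMMON_BIGRAMS:
        delay *= 0.6                     # faster for common letter pairs
    if prev == " ":
        delay *= 1.4                     # slower at the beginning of words
    if prev in ".!?":
        delay += rng.uniform(300, 700)   # brief pause at the end of sentences
    if rng.random() < 0.02:
        delay += rng.uniform(800, 2000)  # occasional longer "thinking" pause
    return max(delay, 30)                # physical floor on keystroke speed

def typing_schedule(text: str, seed: int = 0) -> list[float]:
    """Cumulative timestamps (ms) at which each character of `text` appears."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for prev, curr in zip(" " + text, text):  # leading space = start of a word
        t += keystroke_delay(prev, curr, rng)
        times.append(t)
    return times
```

Emitting each character at its scheduled timestamp is what turns an instant batch insert into the uneven, human-feeling rhythm the scene describes.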
"Watch this," Navan said, pulling up a test document in a viewer they'd built for debugging collaboration scenarios. Two cursors were visible. One was Agent A, reading the document. The other was the twin's simulated typist, composing a paragraph.
Characters appeared one by one. The rhythm was uneven in exactly the right way. A burst of fast keystrokes for a short common word. A pause. Then slower, more deliberate typing for a technical term. Another pause, slightly longer. Then a flurry of corrections—a character deleted and retyped, the kind of micro-error that humans make without thinking about it.
"That's unsettling," Jay said.
"It gets better." Navan ran a different scenario. Two simulated typists, both in the same document, both typing simultaneously. Their cursors moved through different sections. When one typist's insertion shifted the document content, the other typist's cursor position adjusted smoothly. The operational transformation engine handled the real-time conflict resolution while the typing simulation handled the character-by-character timing.
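The cursor adjustment in that scene is the core move of operational transformation. Real OT engines (Google Docs among them) transform entire concurrent operation streams; the toy function below shows only the position-shift rule, with all numbers in the usage lines invented for illustration.

```python
# Toy sketch of OT cursor adjustment: when a concurrent insertion lands at or
# before another typist's cursor, that cursor shifts right by the inserted
# length; insertions after it leave it alone.

def transform_cursor(cursor: int, insert_pos: int, insert_len: int) -> int:
    """Return the cursor position adjusted for a concurrent insertion."""
    if insert_pos <= cursor:
        return cursor + insert_len  # text inserted before the cursor pushes it right
    return cursor                   # text inserted after the cursor does not move it

# Typist A inserts "draft " (6 chars) at offset 10; typist B's cursor is at 40.
assert transform_cursor(40, 10, 6) == 46
# An insertion past B's cursor leaves it in place.
assert transform_cursor(40, 55, 6) == 40
```

Applying this transform on every remote insertion is what lets the second typist's cursor glide smoothly instead of jumping into the wrong word.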
"Why simulate typing errors?" Jay asked. "The agents don't make typos."
"But the collaboration partners might. If an agent is reading a document while a simulated human is typing it, the agent needs to handle partial words, mid-sentence edits, deleted paragraphs that reappear moments later. The typing simulation creates a realistic stream of micro-changes that tests the agent's ability to read a document that's being modified in real time."
Justin watched the demonstration from the doorway. The simulated typist was composing a meeting summary. Characters flowed onto the screen with an organic rhythm. Pauses at punctuation. Speed bursts through familiar phrases. A moment of apparent hesitation before a proper noun.
"The pause before the proper noun," Justin said. "Is that modeled, or accidental?"
"Modeled. The typing profile includes latency spikes before capitalized words and words longer than eight characters. It's a statistical artifact of human typing data, but it creates a convincing illusion."
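The latency-spike rule Navan describes is easy to express as a per-word hesitation added on top of the per-keystroke delays. The eight-character threshold is from the story; the spike magnitudes here are invented assumptions.

```python
# Sketch of the profile's pre-word latency spikes: extra hesitation before
# capitalized words and before words longer than eight characters.
# Spike sizes (250 ms, 180 ms) are illustrative, not the profile's values.

def word_hesitation(word: str) -> float:
    """Extra pre-word latency in milliseconds for the simulated typist."""
    extra = 0.0
    if word[:1].isupper():
        extra += 250  # hesitation before capitalized words (proper nouns)
    if len(word) > 8:
        extra += 180  # hesitation before long words
    return extra

assert word_hesitation("Navan") == 250           # capitalized only
assert word_hesitation("collaboration") == 180   # long only
assert word_hesitation("Infrastructure") == 430  # both rules stack
```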
"It's not an illusion," Justin said quietly. "It's a behavioral model. The same as everything else we build." He paused. "It just feels different when the behavior being modeled is human."
Navan saved the typing profile as a configuration file. Eighty-seven parameters defining how a ghost typed. The Docs twin stored it alongside its API behavioral model, just another facet of the simulation.
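What a fragment of such a profile might look like, as plain data: the parameter names and values below are entirely hypothetical (the story specifies only that there are eighty-seven of them), but they show how the behaviors above could collapse into one configuration object.

```python
# Hypothetical fragment of a typing-profile configuration file.
# All names and values are invented; the full profile would carry
# eighty-seven parameters, not this dozen.
TYPING_PROFILE = {
    "base_interval_ms": 120,
    "interval_stddev_ms": 25,
    "bigram_speedup": 0.6,
    "word_start_slowdown": 1.4,
    "sentence_end_pause_ms": [300, 700],
    "thinking_pause_prob": 0.02,
    "thinking_pause_ms": [800, 2000],
    "pre_capitalized_spike_ms": 250,
    "long_word_threshold_chars": 8,
    "long_word_spike_ms": 180,
    "typo_rate": 0.01,
    "typo_correction_delay_ms": [150, 400],
    # …the remaining parameters (burst length, fatigue drift, etc.) omitted
}
```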
Eerily human was the goal. The twin had achieved it.