The CXDB content deduplication system used BLAKE3 hashing to identify duplicate blobs across the Content-Addressable Store. That much was straightforward. What made it interesting was the layer above the hashing: a reference-counting garbage collector that could safely reclaim storage for blobs that were no longer referenced by any turn in any branch, even when new branches were being forked concurrently.
It had taken three iterations to get the garbage collector right. The first version had a race condition during branch forks. The second version was correct but held a global lock that killed throughput. The third version used a technique the agent had invented on its own: epoch-based reclamation with per-branch hazard pointers. No global lock. No race conditions. Throughput stayed flat even under concurrent fork storms.
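The bookkeeping behind that third version can be sketched in miniature. This is a deliberately simplified model, not the agent's lock-free Rust implementation: it guards the epoch table with a single mutex for clarity, and every name in it (`Reclaimer`, `Pin`, `Retire`, `Advance`) is illustrative. What it does demonstrate is the core idea: readers pin an epoch before touching shared data, retired items are tagged with the epoch they died in, and nothing is freed until every pinned reader has moved past that epoch.

```go
package main

import (
	"fmt"
	"sync"
)

// Reclaimer is a minimal epoch-based reclamation sketch. Readers "pin"
// the current epoch before touching shared data and "unpin" when done;
// retired items are freed only once every active reader has advanced
// past the epoch in which they were retired.
type Reclaimer struct {
	mu      sync.Mutex
	epoch   uint64
	pins    map[int]uint64      // reader id -> pinned epoch (a per-branch hazard record)
	retired map[uint64][]string // epoch -> items retired in that epoch
	freed   []string
}

func NewReclaimer() *Reclaimer {
	return &Reclaimer{pins: map[int]uint64{}, retired: map[uint64][]string{}}
}

// Pin registers a reader in the current epoch.
func (r *Reclaimer) Pin(reader int) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.pins[reader] = r.epoch
}

// Unpin removes the reader's hazard record.
func (r *Reclaimer) Unpin(reader int) {
	r.mu.Lock()
	defer r.mu.Unlock()
	delete(r.pins, reader)
}

// Retire marks an item unreachable. It cannot be freed yet, because a
// pinned reader may still hold a reference from an earlier epoch.
func (r *Reclaimer) Retire(item string) {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.retired[r.epoch] = append(r.retired[r.epoch], item)
}

// Advance bumps the global epoch, then frees items retired in any epoch
// strictly older than the minimum pinned epoch.
func (r *Reclaimer) Advance() {
	r.mu.Lock()
	defer r.mu.Unlock()
	r.epoch++
	min := r.epoch
	for _, e := range r.pins {
		if e < min {
			min = e
		}
	}
	for e, items := range r.retired {
		if e < min {
			r.freed = append(r.freed, items...)
			delete(r.retired, e)
		}
	}
}

func main() {
	r := NewReclaimer()
	r.Pin(1)          // a branch pins epoch 0
	r.Retire("blobA") // retired in epoch 0; the pinned branch may still see it
	r.Advance()       // epoch advances, but blobA survives: a reader is pinned at 0
	fmt.Println(len(r.freed)) // 0
	r.Unpin(1)
	r.Advance()       // no pins remain, so epoch 0's retirees are freed
	fmt.Println(r.freed) // [blobA]
}
```

The property that matters for concurrent forks is visible even in this toy: retiring an item never blocks on readers, and readers never block each other; the only deferred work is the free itself.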
Jay documented the pattern. Not the Rust code—the pattern. Epoch-based reclamation with hazard pointers for safe concurrent reference counting in a DAG structure. He wrote it up in two pages, with diagrams. The gene.
Then the pattern started to travel.
Leash needed something similar. Its container lifecycle manager tracked references to mounted volumes, credential caches, and prompt files. When a container was destroyed, its resources needed to be cleaned up—but only if no other container was sharing them. The existing implementation used a simple reference count with a mutex, and it worked, but it didn't scale when agents were spinning up dozens of containers in parallel during a sprint.
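The shape of that existing implementation is worth seeing, because it explains the bottleneck. The sketch below is illustrative, not the actual Leash code, and its names (`refCounter`, `acquire`, `release`) are invented: one count per shared resource, one mutex guarding all of them, cleanup triggered by whoever drops the count to zero.

```go
package main

import (
	"fmt"
	"sync"
)

// refCounter models the simple mutex-guarded reference count described
// above: every shared resource gets a count, and cleanup runs only when
// the last container holding it is destroyed.
type refCounter struct {
	mu     sync.Mutex
	counts map[string]int
}

func (c *refCounter) acquire(resource string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[resource]++
}

// release decrements the count and reports whether the caller should
// clean the resource up (the count reached zero).
func (c *refCounter) release(resource string) bool {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.counts[resource]--
	if c.counts[resource] == 0 {
		delete(c.counts, resource)
		return true
	}
	return false
}

func main() {
	c := &refCounter{counts: map[string]int{}}
	c.acquire("vol-cache")              // container A mounts the volume
	c.acquire("vol-cache")              // container B shares it
	fmt.Println(c.release("vol-cache")) // false: B still holds it
	fmt.Println(c.release("vol-cache")) // true: last holder gone, clean up
}
```

Correct, and fine at low concurrency. But every mount and teardown in the fleet serializes on that one mutex, which is exactly what stops scaling when dozens of containers churn in parallel.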
Navan fed the CXDB pattern document to an agent along with the Leash container lifecycle code. The agent produced an epoch-based reclamation system for container resources in forty minutes. The scenarios passed. Cleanup latency dropped by an order of magnitude under concurrent load.
A week later, the Attractor pipeline runner needed it too. Checkpoint files accumulated during long-running pipelines. The cleanup logic was naive—delete checkpoints older than a threshold. But parallel pipeline branches could share checkpoints through the resume system. The naive cleanup occasionally deleted checkpoints that were still needed by a branch that hadn't run yet.
Same gene. Different organism. Jay gave the pattern document to an agent working on the Attractor codebase. The agent produced a checkpoint garbage collector that used epoch-based reclamation. The shared checkpoints were safe. The stale ones were cleaned up.
Three projects, four implementations. Three different languages: Rust, then Go and TypeScript, then Go again. Three different resource types: blobs, container volumes, checkpoint files. One pattern.
"CXDB is the donor," Justin observed during a standup. "One well-tested pattern, transfused into three recipients."
"Does the donor know?" Navan asked, and the question was more interesting than it sounded. The CXDB codebase didn't know its garbage collector pattern was being used elsewhere. There was no dependency, no shared library, no import statement linking the four implementations. They were genetically related but structurally independent.
"That's the point," Justin said. "A shared library creates coupling. A shared pattern creates consistency without coupling. The implementations can evolve independently. They just started from the same proven idea."
Jay updated the pattern document with a new section: "Known Recipients." Three entries. He suspected there would be more.
"Consistency without coupling." That's the whole pitch for gene transfusion in three words. Shared libraries force you to upgrade together. Shared patterns let you evolve apart.