
Software Factory Archive

← Previous Work All Works Next Work →

Parallel Execution

Rating:
General Audiences
Fandom:
StrongDM Software Factory
Characters:
Jay Taylor, Navan Chauhan
Tags:
Agate, Parallelism, multi.go, Performance
Words:
458
Published:
2026-01-20

The file was called multi.go, and it was the reason Agate was fast.

Jay found it while reading through the Agate codebase on a slow afternoon, the kind of afternoon when the agents were running and there was nothing to do but wait and learn. He had been browsing the internal/ directory, reading each file the way you'd read chapters of a book—not for any immediate purpose, but because understanding the tool you used was a form of respect for the people who built it.

multi.go contained the parallel execution logic. The code was clear, well-commented, and shorter than Jay expected. The core was a function that accepted a list of tasks and a dependency graph, identified which tasks had no unmet dependencies, and launched them simultaneously. When a task completed, it checked whether any blocked tasks were now unblocked. If so, those launched too. The whole thing was driven by Go channels and a WaitGroup, the standard concurrency primitives of the language.

It was, Jay realized, a topological sort followed by a parallel scheduler. Computer science fundamentals dressed in Go idioms. Nothing clever. Nothing novel. Just the right algorithm applied to the right problem.

The impact was dramatic. Jay had been tracking sprint durations in a spreadsheet—a habit from his SRE days, when measuring everything was not optional. The numbers told a clear story.

A sprint with six tasks, all sequential: five hours forty minutes. The same sprint with parallel execution enabled, four of the six tasks independent: one hour fifty minutes. A three-fold improvement from parallelism alone.

"The sprint that used to take six hours finishes in two," Navan said when Jay showed him the numbers. He had been tracking his own metrics in his notebook, because of course he had, and his numbers matched Jay's.

"It's not magic," Jay said. "It's just the observation that if two tasks don't depend on each other, there's no reason to run them sequentially. The agents aren't sharing state. They're writing to different files. They can work simultaneously without conflict."

"But someone had to write the code that figures out which tasks are independent."

"That's the sprint plan. The dependency annotations in the plan are what multi.go reads. If a task declares no dependencies on other tasks, it's eligible for parallel execution. The sprint planner does the hard work of identifying independence. multi.go just executes what the planner decided."
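The eligibility rule Jay describes reduces to a one-line check: a task with an empty dependency list can start now. A minimal sketch, assuming a plan schema invented here for illustration (the story never shows the planner's actual format):

```go
package main

import "fmt"

// planTask mirrors a hypothetical dependency annotation in a sprint
// plan; the real schema Agate's planner emits isn't shown in the story.
type planTask struct {
	Name string
	Deps []string
}

// firstWave returns the tasks eligible to launch immediately: those
// that declare no dependencies on other tasks.
func firstWave(plan []planTask) []string {
	var eligible []string
	for _, t := range plan {
		if len(t.Deps) == 0 {
			eligible = append(eligible, t.Name)
		}
	}
	return eligible
}

func main() {
	plan := []planTask{
		{Name: "api"}, {Name: "ui"},
		{Name: "integration", Deps: []string{"api", "ui"}},
	}
	fmt.Println(firstWave(plan)) // [api ui]
}
```

The hard part—deciding that "api" and "ui" truly touch different files—belongs to the planner; the executor just trusts the annotations.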

Navan flipped through his notebook to the page where he'd sketched the dependency graph from his last sprint. Four tasks in the first tier, parallel. Two tasks in the second tier, dependent on the first. One task in the final tier, dependent on everything. The shape was a wide pyramid, and the width at the top was the parallelism opportunity.

"Wider pyramids finish faster," Navan said.

"Now you're thinking like a scheduler," Jay replied.
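The scheduler's arithmetic is simple to state: sequentially, a sprint costs the sum of all task durations; in parallel, each tier costs only as much as its slowest task. With illustrative numbers (not Navan's actual task times) for a 4-2-1 shaped sprint:

```go
package main

import "fmt"

// sprintTimes returns the total sequential duration and the parallel
// wall-clock duration for tasks grouped into dependency tiers.
// In parallel, a tier takes as long as its slowest task.
func sprintTimes(tiers [][]int) (sequential, parallel int) {
	for _, tier := range tiers {
		longest := 0
		for _, d := range tier {
			sequential += d
			if d > longest {
				longest = d
			}
		}
		parallel += longest
	}
	return
}

func main() {
	// Hypothetical durations in minutes for a sprint shaped like the
	// one in Navan's notebook.
	tiers := [][]int{
		{50, 60, 45, 55}, // tier 1: four independent tasks
		{40, 50},         // tier 2: depends on tier 1
		{30},             // tier 3: depends on everything
	}
	seq, par := sprintTimes(tiers)
	fmt.Printf("sequential: %d min, parallel: %d min\n", seq, par)
}
```

With these numbers, 330 sequential minutes collapse to 140—roughly the 60-to-70-percent reduction the spreadsheet records. The narrow tiers at the bottom are the limit: no amount of parallelism shortens a tier of one.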

The spreadsheet grew. Sprint after sprint, the parallel durations were consistently sixty to seventy percent shorter than their sequential equivalents. The math was simple. The impact was not.

Kudos: 79

goroutine_lover 2026-01-22

Channels and a WaitGroup. That's it. The right primitives for the right problem. Sometimes Go's simplicity is its greatest strength.

wide_pyramid 2026-01-23

"Wider pyramids finish faster" is an excellent mental model for task parallelism. I'm stealing this for my next architecture talk. Sorry Navan.
