ProbableOdyssey

AI and the Mythical Man-Month: Productivity or Paradox?

The Mythical Man-Month by Fred Brooks is widely regarded as one of the foundational books on software engineering. Among its many insights, one has stood the test of time:

Adding more people to a late software project only makes it later.

Software development is not trivially parallelizable — communication and coordination overheads dominate. Though the original text was written in 1975, it has proven remarkably durable in the decades since. This key insight is a common thread across many domains involving long-term projects, and it’s no surprise that the book is well regarded outside software engineering: complexity scales non-linearly with team size in any long-term collaborative effort.

But what about AI?

I doubt Fred Brooks envisioned the reality we now find ourselves in, but I would argue that many of his core principles still hold today, perhaps more strongly than ever.

There has been no shortage of hype around AI-driven productivity in software. Demos abound showing LLMs rapidly producing code from vague instructions. Agentic workflows promise to go even further, stringing together sequences of actions to build, deploy, and even monitor systems. Many developers claim substantial boosts in productivity.

We’re still in the “wild west” stage of this technology — and there is a shocking amount of snake oil in circulation. It’s hard to weigh these anecdotes and curated demonstrations and extrapolate their impact across the field as a whole.

Revisiting “The Mythical Man-Month” helped me clarify my skepticism:

Yes, AI accelerates the code writing process dramatically, but writing code was never the bottleneck — reading the code is. The harder problems live upstream (design) and downstream (integration, maintenance). AI doesn’t solve these, and sometimes it makes them worse:

All of that code still needs to be reviewed, tested, and ultimately approved by someone who takes responsibility. In many workflows, this slows things down. The coordination costs don’t vanish: they shift. If these tools aren’t used carefully, delivery can actually take longer.

To be clear: the progress made in this space in a very short span has been astonishing — and it shows no signs of slowing down. Don’t forget: this is the worst this technology will ever be; it will either plateau or keep improving.

There is a strong case for agentic workflows: they let you offload smaller implementation details while you focus on the harder upstream problems. In practice, however, I often find that the cognitive friction of context-switching (from deep thinking to debugging hallucinated output) makes it harder to stay focused. Once again, the system stalls at the point of coordination, even if the “other party” is just an AI.

What I’ve found more helpful is using AI to overcome the hurdles of starting something new. Ironically, a rubber duck that responds with good ideas helps me get past obstacles much more easily. But importantly, I’m still at the wheel, and I can still weave the larger picture through all the small details. The consequences of these decisions accumulate, and that can make or break a large codebase if not handled carefully.

The rise of multiprocessing gave us powerful new tools, and new ways to shoot ourselves in the foot. Over time, we developed safer paradigms for concurrency, but thinking in parallel is still hard for humans. I suspect agentic workflows will follow a similar path.
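The classic concurrency footgun, and the kind of safer paradigm we eventually settled on, can be sketched in a few lines of Python. This is my own generic illustration, not an example from the original post: several threads increment a shared counter, and a lock serializes the critical section so no updates are lost — coordination made explicit, at the cost of some parallelism.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """The footgun: 'counter += 1' is a read-modify-write,
    so concurrent threads can silently drop updates."""
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    """The safer paradigm: serialize the critical section."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n_threads=8, n_iters=10_000):
    """Run `worker` in n_threads threads and return the final counter."""
    global counter
    counter = 0
    threads = [threading.Thread(target=worker, args=(n_iters,))
               for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

print(run(safe_increment))  # always 80000; the unsafe version may lose updates
```

The locked version is deterministic; the unlocked one only appears correct until an unlucky interleaving occurs. Agentic workflows may well need the same evolution: explicit coordination points rather than hoping concurrent work composes on its own.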

What that looks like in practice — and whether it delivers on the promises being made today — remains to be seen.
