
Start simple, iterate to perfection

Starting with a “dumb” baseline in data science feels embarrassing. It looks like you’re not trying hard enough, or that you don’t know the fancy methods everyone else is using.

But when building out a new system (outside of creative side-projects or learning exercises), one of the worst things you can do is jump straight to complexity.

When building something reliable, quality matters far more than quantity, and jumping the gun often means missing opportunities for simpler solutions. The cost shows up later as accumulated tech debt, or as literal compute bills from a poorly optimized system.

Instead: start simple. A rough baseline exposes the shape of the problem faster than any polished model: how the data actually behaves, which metrics move, and where the real bottlenecks live. You don’t discover these truths by crafting the perfect system in isolation.
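As a concrete (if contrived) illustration, here’s a minimal sketch assuming scikit-learn and a made-up regression dataset: the “dumb” baseline just predicts the training mean, and anything fancier has to beat that number before its complexity is justified.

```python
# A minimal sketch of a "dumb" baseline, assuming a tabular regression problem.
# The dataset and the Ridge model are placeholders, not a recommendation.
import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 5))              # stand-in features
y = X[:, 0] * 3.0 + rng.normal(size=500)   # stand-in target

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Baseline: always predict the training mean. This is the number every
# later iteration has to beat.
baseline = DummyRegressor(strategy="mean").fit(X_train, y_train)
baseline_mae = mean_absolute_error(y_test, baseline.predict(X_test))

# First simple model: does it actually move the metric?
model = Ridge().fit(X_train, y_train)
model_mae = mean_absolute_error(y_test, model.predict(X_test))

print(f"baseline MAE: {baseline_mae:.3f}")
print(f"ridge MAE:    {model_mae:.3f}")
```

Two numbers like these are the whole point: they tell you whether the problem is even hard, and they give every future iteration something concrete to beat.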

The point of the first iteration is orientation. It helps you frame your thinking optimally for a problem and establish how you’ll measure progress as you iterate towards perfection.

Starting simple doesn’t say anything about your competence. It says you’re willing to face reality early instead of hiding behind perfectionism. The people who avoid simple baselines aren’t being rigorous — they’re being afraid.

If you want high standards, define them with evidence. A quick baseline and a real metric are the fastest way to learn what “good” even is.
