The art of estimation

Andrei Dascalu
Jan 1, 2023
Photo by Alexander Schimmeck on Unsplash

After reading this piece I was left in disbelief that people in the software industry still get the idea of estimation wrong. It’s clear from the start that the author misunderstands the goal of estimation. In practice, estimation answers two questions: when will it be done? and/or how long will it take?

These are questions that people “in charge” want answered upfront, or at some reasonable point along the way. In the article, however, the author equates estimation with a bill, which is produced after the fact. Worse, it points to repair shops, which don’t really estimate at all: they look at a list of parts and the standard time required to perform a fixed operation on each. Nothing customisable, no adaptation along the way.

For a TLDR: there are ways to provide management with answers (see https://www.youtube.com/watch?v=ESOaDiv3lXA, https://www.youtube.com/watch?v=QVBlnCTu9Ms, https://www.youtube.com/watch?v=v21jg8wb1eU&t=489s, https://www.youtube.com/watch?v=NEJUkvWGuvw). There’s a difference between what clients and managers ask for (aka what they think they need) and what they really need, particularly in a fast-changing field like software development, and that “really need” differs from project to project.
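One flavour of the answers those talks describe is throughput-based probabilistic forecasting: instead of estimating each item, resample the team’s recent delivery history to get a probability distribution over completion dates. Here’s a minimal sketch of that idea; the throughput numbers and backlog size are invented for illustration:

```python
import random

# Hypothetical data: items finished per week over the last 8 weeks.
weekly_throughput = [3, 5, 2, 4, 6, 3, 4, 5]

def forecast_weeks(backlog_size, history, simulations=10_000):
    """Monte Carlo forecast: repeatedly replay randomly sampled past
    weeks until the backlog is empty, then report percentile outcomes."""
    outcomes = []
    for _ in range(simulations):
        remaining, weeks = backlog_size, 0
        while remaining > 0:
            remaining -= random.choice(history)  # one plausible week
            weeks += 1
        outcomes.append(weeks)
    outcomes.sort()
    # 50th and 85th percentile completion times, in weeks.
    return outcomes[simulations // 2], outcomes[int(simulations * 0.85)]

p50, p85 = forecast_weeks(backlog_size=30, history=weekly_throughput)
print(f"50% chance within {p50} weeks, 85% chance within {p85} weeks")
```

The point is that the output is a probability (“85% chance within N weeks”), not a single date anyone gets blamed for missing, and it updates for free as new throughput data arrives.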

Now, for a more detailed story: across 21 years of development I was involved in exactly one big (enterprise) project that went by estimates in a way that could be considered “accurate enough”. It was one of my very first projects in the field (well, “mine” = “I was a junior developer on it”). We had clear requirements (the customer and a team of PMs on our company’s side spent months refining them; a PoC was involved as well, before my time). For every change, we would itemise the work needed (much like the article that inspired me suggests). Since time was spent upfront nailing things down and itemising details, what came out was largely accurate in terms of deadlines. Deadlines were met, though far from perfectly. Yet I can’t recall a single instance when tough decisions were made based on those estimates. I got the feeling (unconfirmed) that since a lot of time (often more than the development itself) was sunk into detailing the work, nobody had the stomach to cut things even when it was clear they might not work as intended.

This was just as the recovery from the dot-com crash of the late 90s and 2000 was taking hold. Software as an industry was quite far from the fast-paced thing it is today. Agile wasn’t yet known as “Agile”, though its concepts were steadily permeating.

What we learned (estimation-wise):

  • if you spend enough time nailing things down, you will get to an acceptable level of detail (but you need a clear definition of what “acceptable” means, otherwise you won’t know when to stop)
  • customers have to be willing to pay for that time
  • customers must agree to a way of controlling change (that is, assessing the implications of work that’s discovered along the way); this worked for us mostly because the customer did it on their own and showed us the results

What we learned since:

  • today, nobody wants to pay for “project management time”, and many don’t want to spend the time at all (even “on the house”), because time spent not coding is time the product (even in alpha) is not being assessed by real customers
  • today, nobody wants to control change — they just want it done at no extra cost
  • all of the above amounts to waste. Some waste is inevitable and some is acceptable, but its impact today (as opposed to AD 2001) can be lethal to a software company.

Things today aren’t what they used to be. In the early 2000s you could still sit on your software project and refine it; today you’re developing a product alongside competing products (or trying to get ahead of existing ones). There’s simply no time, or any other resource, to waste.

Why I’m in the “no-estimates” camp today:

  • “how long X takes” helps to assess costs, but it won’t say when X will be done. X may take 5 days of effort, yet work stops and starts due to dependencies, sick time, whatnot. In the end 5 days were spent on the task, but it finished after 12 working days. Was the estimate useful?
  • “when X is ready” is a no-go. You start X and discover other stuff that needs to be done to enable work on X, or maybe that X can’t be done at all. You could have answered the same questions by doing R&D upfront, but why not try development right off the bat? If you do the R&D and X turns out feasible, you’re left with throwaway R&D work (waste); if you skip it and X turns out infeasible, you’re in no worse a position than you would have been after the R&D. Either way, the estimate was wrong. If you’re worried about costs or strategic delivery decisions, you might as well timebox X to an acceptable amount of time: if it’s not done by then, you can at least shelve the work and use it for a head start later on.
  • Agile (as in the Scrum variety) is largely a distraction that probably costs more than what you’re getting out of it (https://www.youtube.com/watch?v=WFbvJ0dVlHk). I’ve been there in a lot of different scenarios: once people see what Scrum is about, nobody really wants it as-is, and the first thing they do is remove anything remotely related to the “agile manifesto” from it (see the horror called SAFe).
  • forced estimations enable a blame game, which is only important in enterprises focused on “ass-covering” tactics rather than delivery of working software.
  • forced estimations enable monitoring of people, which matters in cultures based on a lack of trust. You’re setting a timeline for yourself so that managers can monitor your progress along it. It’s a setup for failure, even when presented under the guise of transparency.

What to do probably deserves an article of its own:

  • items should be prioritised by value: the most valuable work comes first (along with its dependencies). Delivery strategy should adapt to testing results, ideally combined with less rigid requirements that allow quick adjustments based on feedback.
  • real feedback is production feedback — things should go to production ASAP if they don’t break anything instead of being tested for acceptance away from production.
  • populate your team with people you trust to do what the project needs. If you can’t trust people and feel the need to check on them, it’s better not to have them on your team.

There’s probably more to be said, but others have said it better than I can, so go to the likes of Jez Humble, Dave Thomas, Kent Beck or Allen Holub.