Steve McConnell on Software Estimation

By Ryan Kienstra on October 24, 2017, in Programming

"It is difficult to make predictions, especially about the future." (unknown source)


Predictions are hard. But I think the real goal of software estimation is to be as accurate as possible, and then to convey any uncertainty in the estimate.

Steve McConnell’s Software Estimation: Demystifying the Black Art gives some great ways to do this.

Some of his best techniques are estimating in ranges and by task.

Estimating In Ranges

Every task in the project gets an estimate for:

  • Worst case
  • Best case
  • Most likely case

There might be wide ranges at the beginning of the project. McConnell suggested that if you feel pressure to give a narrower range, make sure the pressure isn’t self-imposed.

The cost of underestimating a project is often much higher than the cost of overestimating it, though agencies can lose projects because of high estimates.

The fact that the range is wide can show a client that there are a lot of unknowns. Maybe more definition would narrow the estimate.
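
One common way to roll the three points up into a single expected case is the PERT formula, (best case + 4 × most likely + worst case) / 6, which McConnell’s book also discusses. Here’s a minimal Python sketch; the tasks and numbers are made up:

    # Rolling hypothetical three-point estimates up with the PERT formula,
    # which weights the most likely case most heavily.

    def pert(best, likely, worst):
        """Expected case from a three-point estimate, in days."""
        return (best + 4 * likely + worst) / 6

    # (best case, most likely, worst case), in days
    tasks = {
        "Login form": (1.0, 2.0, 4.0),
        "Profile page": (0.5, 1.0, 3.0),
        "API client": (2.0, 3.0, 8.0),  # wide range: third-party unknowns
    }

    print(f"Best case:  {sum(b for b, _, _ in tasks.values()):.1f} days")
    print(f"Expected:   {sum(pert(*t) for t in tasks.values()):.1f} days")
    print(f"Worst case: {sum(w for _, _, w in tasks.values()):.1f} days")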

Worst Case

McConnell also mentioned that thinking about the worst case often increases the estimate for the best case, because it can expose complexities.

He asks developers to imagine:

how long the task would take if everything went wrong

The worst case can be even more complex when third parties are involved. For example, an API might not be available.

Percentage Confidence?

A different option is to give a single estimate with a percentage confidence. I prefer giving a range instead, as it’s harder to accidentally simplify.

For example, “12 weeks, with 75% probability” might be inadvertently simplified to “12 weeks,” especially because clients might be used to getting single-number estimates.

But 10-14 weeks is harder to simplify. McConnell gave a similar reason for estimating in large units, like “delivery in the second quarter” instead of “6 months, +3 months, -1 month.” It’s hard to simplify “second quarter” without changing it.
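
As a rough illustration (my assumption, not McConnell’s), you can make the link between ranges and percentages explicit by treating the schedule as roughly normal, with the 10-14 week range covering the 10th through 90th percentiles. It also shows how much information a bare “12 weeks” hides:

    # A hypothetical model: treat the 10-14 week range as the 10th-90th
    # percentiles of a normal distribution, then ask how confident a
    # single-point "12 weeks" estimate really is.
    from statistics import NormalDist

    low, high = 10.0, 14.0            # the estimated range, in weeks
    z90 = NormalDist().inv_cdf(0.90)  # ~1.28 standard deviations

    mu = (low + high) / 2             # midpoint of the range
    sigma = (high - low) / (2 * z90)  # spread implied by the percentiles
    schedule = NormalDist(mu, sigma)

    print(f"P(done by 12 weeks) = {schedule.cdf(12.0):.0%}")  # 50%
    print(f"P(done by 14 weeks) = {schedule.cdf(14.0):.0%}")  # 90%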

Estimating By Task

Break the project into tasks that take no longer than 2 days. If any are longer, break them into sub-tasks. Some might be 1/2 or 1/4 day.

We estimate by task at XWP, and it generates conversation about how complex each task could be. It also clarifies the requirements.

Shows Complexity

As McConnell mentioned, breaking an estimate into small tasks exposes extra work. That’s sometimes the case in our estimates at XWP.

It can highlight a feature that turns out to be a necessary part of another feature.

McConnell also suggested estimating by specific tasks in a Work Breakdown Structure (WBS). This exposes activities that might not otherwise appear in an estimate, such as meetings, managing change, and documentation.
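
As an illustrative sketch (the structure and numbers are hypothetical), a WBS can be modeled as nested tasks whose estimates roll up, which keeps overhead like meetings and documentation from silently disappearing from the total:

    # A hypothetical Work Breakdown Structure: leaf tasks carry estimates
    # (in days), and parent tasks roll their children up.
    from dataclasses import dataclass, field

    @dataclass
    class Task:
        name: str
        days: float = 0.0  # leaf estimate, in days
        children: list["Task"] = field(default_factory=list)

        def total(self) -> float:
            return self.days + sum(child.total() for child in self.children)

    project = Task("Checkout feature", children=[
        Task("Build", children=[
            Task("Payment form", 2.0),
            Task("API client", 1.5),
        ]),
        Task("Overhead", children=[  # easy to forget without a WBS
            Task("Meetings", 1.0),
            Task("Managing change", 0.5),
            Task("Documentation", 1.0),
        ]),
    ])

    print(f"{project.name}: {project.total()} days")  # 6.0 days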

Also, the people who will carry out the task should estimate it.

Task-based estimation averages out errors, through the law of large numbers. Individual estimates can be off by a large percentage, but the final estimate can be much closer.
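
Here’s a small simulation (my own illustration, not from the book) of that averaging effect: each task estimate is off by up to ±50%, but the error in the summed total is usually only a few percent:

    # Simulating how unbiased task-level errors average out: each of 50
    # task estimates is off by up to +/-50%, yet the total estimate
    # usually lands within a few percent of the true total.
    import random

    random.seed(1)
    true_days = [random.uniform(0.25, 2.0) for _ in range(50)]  # <= 2-day tasks
    estimates = [d * random.uniform(0.5, 1.5) for d in true_days]

    worst_task = max(abs(e - d) / d for e, d in zip(estimates, true_days))
    total = abs(sum(estimates) - sum(true_days)) / sum(true_days)

    print(f"Worst single-task error: {worst_task:.0%}")  # up to ~50%
    print(f"Total estimate error:    {total:.0%}")       # much smaller

One caveat: this only works when the errors point in both directions. A systematic bias, like always estimating low, doesn’t average out.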

Accurate And Transparent

Estimating by task gives a more accurate estimate and lets inaccuracies average out, while giving ranges conveys the uncertainty that remains.

So instead of just a prediction, it’s a transparent view of the project.
