It won’t surprise anyone if I say that estimates are underestimated most of the time. Parkinson’s law ensures they will never be overestimated anyway, but how can we better anticipate all the impediments that will delay our project?
This article is adapted from the talk “How (not) to refactor a kernel” I gave at Agile Tour 2021 in Bordeaux.
Problems occur at the end
The devil is in the details, and details happen at the end. For a lot of projects, the start goes well. You put the architecture in place, handle the generic cases, postpone all the small features that bother you, and feel confident you will meet the deadline. But as you approach 80% of the project scope, you start to slow down: corner cases don’t fit your design, more and more cases turn out not to have been anticipated, and in the end you are left with another 80% of work to address. Your burn-up curve looks like this:
Modeling the curve
“The more there is to do, the easier it is to pick stuff to do”
“When nothing works, small changes have big impact”
“The last use cases are the hardest to address”
I’m probably not the only one to have heard these statements about project progress. Let’s use them as the starting point for our model. Note that I cannot prove they are true: I assume they are, and I will extrapolate what they imply. You are free to disagree with them and thus to disagree with the conclusions I’m going to draw.
Assume we have a finite and constant quantity of work to address in order to complete our project. This can be a number of use cases, a number of bugs, a performance level to reach, etc. As stated before, the greater this remaining quantity, the more effective the team is at addressing it. This can be formalized by this formula:
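The formula itself was a figure in the original talk; it can be reconstructed from the surrounding text as follows (the notation is mine: Q is the remaining quantity of work, E the team’s “effort” coefficient):

```latex
% Work done during one sprint is proportional to the remaining work
W_{\text{sprint}} = E \cdot Q_{\text{remaining}}
```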
For a small period of time (let’s say, a sprint), the work done by the team depends on the remaining work, through a constant “effort” coefficient which characterizes the team. We assume the team is stable and stays focused on its project, so the effort coefficient is constant. Now, let’s do some math and rewrite this equation with infinitesimal time:
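Written with an infinitesimal time step, the sprint-by-sprint relation becomes (again a reconstruction, using Q for remaining work and E for the effort coefficient):

```latex
% Remaining work decreases at a rate proportional to itself
\frac{dQ}{dt} = -E \, Q(t)
```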
You may recognize a first-order differential equation, which is solved this way:
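The solution, in terms of the work done W(t) that the burn-up curve plots (reconstructed to match the k and τ mentioned just below; k is the total quantity of work and τ = 1/E):

```latex
% Remaining work decays exponentially; the burn-up curve is its complement
W(t) = k \left(1 - e^{-t/\tau}\right), \qquad \tau = \frac{1}{E}
```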
With k and τ constant. Their values are not meaningful for our demonstration. If we plot this equation, we end up with this curve:
This curve really looks like the project burn-up curve we had at the beginning, so our model may not be that silly after all.
First-order systems have some characteristic properties that are easy to read graphically. First of all, they never reach their limit, which is annoying when estimating the end of our project… In practice, we consider that this kind of system has reached its limit when it reaches 95% of its target. And we know that these systems take about the same time to go from 0 to 80% as to go from 80% to 95%. This means that, once you have implemented 80% of the features, you are only in the middle of your project:
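We can check this claim numerically with the model above. A quick sketch in Python (τ is set to 1 here, since only the ratios between durations matter):

```python
import math

tau = 1.0  # time constant of the first-order system (arbitrary units)

def time_to_reach(fraction, tau=tau):
    """Time at which W(t)/k = 1 - exp(-t/tau) equals the given fraction."""
    return -tau * math.log(1.0 - fraction)

t80 = time_to_reach(0.80)  # ~1.61 * tau
t95 = time_to_reach(0.95)  # ~3.00 * tau

# Going 0 -> 80% takes about as long as going 80% -> 95%
print(f"0 -> 80%:  {t80:.2f}")
print(f"80 -> 95%: {t95 - t80:.2f}")
```

The two durations (about 1.61τ and 1.39τ) are indeed of the same order: reaching 80% puts you roughly at the midpoint of the project.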
Despite being a very good way to kill your team’s mood, this information is not very useful for building a roadmap, because you have to wait until the middle of your project to know when it will be finished. But this curve has another, much more interesting property: its initial slope tells us when it will reach the target, which is actually 3 times later than you think:
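The factor of 3 can be made explicit from the solved equation (a reconstruction, reusing the k and τ introduced above):

```latex
% Tangent at t = 0: W(t) \approx (k/\tau)\, t, which reaches the target k at t = \tau.
% Actual 95% completion: 1 - e^{-t/\tau} = 0.95
t_{95} = \tau \ln 20 \approx 3\,\tau
```

So the straight-line extrapolation of the initial velocity predicts completion at t = τ, while the 95% point arrives at roughly 3τ.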
Like any human being, as soon as you have a curve you hasten to draw a straight line to extrapolate its evolution. This extrapolation gives you an estimated end date that you happily communicate to your management and stakeholders. But in this case, you know your real end date will be 3 times this estimate. This can be summarized as:
Completing the project will take three times what your initial velocity suggests
Is 3 a magic number?
I’ve heard some project managers state that estimates should always be multiplied by 3, or even by Pi, with more or less serious justifications such as the “three-point estimating technique” or the infamous “Circular Estimation Conjecture”. But my favorite argument for Pi is that, since an estimate is always irrational, multiplying it by Pi will make sure it ends up irrational anyway.