Thursday, January 19, 2017

Why Are We So Awful at Estimating Large Software Projects?

Recently I was talking with some guys over at Xamarin about estimating projects, and it got me thinking again about why we are so bad at it. The software industry has recognized for years that we are awful at estimating large software projects. A lot of people have written on the subject, and at this point I feel I'm crusty enough to write on it too.

We've tried to work our way around our poor estimating skills with Agile models where there is no up-front estimating of the project, and that's great work if you can get it. But the reality is that in many organizations there is a CFO or another person in control of the budget who decides whether a project gets funded. They want to know, "How much is it going to cost?" and they want to know before the first line of code is ever written. So we need that up-front waterfall estimate for them, even if we know how difficult such an estimate is likely to be.

When it comes to estimating large software projects, for years I've seen a diagram floating around similar to this:

[Hand-drawn diagram: the cone of uncertainty] *

* I'm sorry about my poor drawing skills. This was done on an iPad Pro with a Pencil. The iPad Pro is outstanding (thanks Greg!!!!), my drawing skills are not.

This diagram is of something referred to as the cone of uncertainty. The story goes that when you start a project you know very little about it, and thus you don't know its cost. You have a high degree of uncertainty and may think it is more or less difficult than it really is. Over time, as you execute on the project, you learn more about it, and the uncertainty about the project and its cost shrinks until the project is done. When it is done, you finally know exactly how much it did cost.

At the start of the project you know nothing, or very little. This is when people may make some estimates, shown by the red dots. Some of the estimates will be too high because the complexity and cost were overestimated, and some will be too low because they were underestimated. A nice, even distribution.

This is a myth we like to tell ourselves. Here is the reality:


What a difference. Most of the estimates are low, way low. A few may be close to reality; those are likely the bids that come in way higher than the others when you go out to bid. Very rarely does any estimate match the actual cost at the end.

I'm sure you are thinking this is terrible. Why does it happen? I believe there are several factors that go into it.


  • We have a tendency to think things we don't know well are simpler than they are
When writing custom software we are doing something that has never been done before. When we think about something we have never done before, we tend to think it is simpler than it is because we don't understand the complexities involved. We don't know what we don't know.

Recently I was working with a colleague and we were asked how much it would cost to add internationalization to an application. My colleague thought it was a reasonably simple task and listed off how the Android system handles string resources. Having just completed a project with internationalization, I replied that it could potentially be a large effort. It was a cross-platform project, so we needed a way to share resources across platforms; we would have to see whether we needed currency conversion and which exchange rates to use; we might have unit-of-measure conversions like pounds to kilograms or miles to kilometers; we might have graphics that need to differ per culture; we might even need to tweak the layout to make space for languages like German with its long words. The amount of complexity in there could be very large, but we don't know all the pieces that could be in there until we have done it before. (A rough sketch of how a few of these concerns stack up follows this list.)

Since the software we write is custom by definition, some of what we are doing is unique: things we have never done before. As such, we will tend to underestimate the effort.
  • Many times it is some of our best people doing the estimates
Another problem we run into when creating estimates is that in many cases it is the senior people who do the estimates on large projects. After all, they are our architects and most experienced people. That is great, but they also tend to be the people who work the fastest, and they may estimate based on how long it would take them to do the task. The person actually doing the work may not have the same experience or velocity.
  • We only think of the primary activity and tend to gloss over all the things around it
When thinking about a feature we tend to think in terms of what it takes to actually code it. What is often forgotten are all the activities around it: creating a feature branch, checking code in and out, merging, creating pull requests, responding to code reviews, and even writing tests (if those are not estimated elsewhere). Additionally, user experience design and quality assurance iterations tend to be much larger than people think. All of these things add up to real time, which brings us to:
  • Things that seem small and inconsequential add up
Some of the activities I just mentioned are easy to discount. After all, it can be argued that it takes almost no time at all to create a feature branch. The problem with that thinking is that it does take time. A couple of minutes here, a couple of minutes there adds up to large buckets of time. When they are not included, the estimate is necessarily too low. We don't tend to add things that aren't there, but we almost always discount things that should be, so the estimate comes in low.
  • People are not machines but many times we estimate as though they are
Sometimes we estimate as though people are going to work eight hours a day, forty hours a week on a project. That is not reality. People need time for bio breaks, administrative tasks, sprint ceremonies, and just time to think. You should probably be planning on something more like six hours a day (a back-of-the-envelope example follows this list).
  • Lower estimates make people happy and we like to please people
This is one of the more insidious reasons why estimates are low. Many of us are hard-wired to try to please people, and that includes the people making the estimates. They know they will please more people when the estimates are lower: the business is more likely to be won, the project more likely to be funded.

I have a saying: unlike wine, bad news does not improve with age. I fully believe that, but it is not something we take to heart in estimating. In most cases it isn't purposeful; it is just an unconscious desire to please that biases estimates lower. We give the lower estimates now because they are good news. But what I said still applies: bad news does not improve with age, and clients don't tend to like finding out a project was underestimated halfway through.
  • We tend to estimate the happy path
Lots of things come up during development. Sometimes we encounter bugs in our tools, sometimes our machines break, sometimes we have to wait for access to some system, or an external resource is unavailable through no fault of our own. We don't normally think about these things when estimating. An underlying assumption in many of our estimates is that our machines will always work, the services will always be up, and we will get immediate access to any resource we need.
  • There are many ways to implement a feature
Recently we gave a presentation to a prospect on why we couldn't estimate their project based on the information they had given us so far. We went over several of their features and described three ways each could be implemented, each with a very different level of effort.

People who go out to bid in many cases think their RFP and specifications are much better defined than they actually are. Since we like to please people and know lower estimates will do so, our estimates tend to be for the simpler implementations. The problem with this is that there was never a meeting of the minds between the prospect and the person doing the estimation. They estimated a Yugo when the prospect may have been envisioning a Cadillac.
  • We only think we know what the software should do
One of the reasons to go with a minimum viable product in mobile is that only through experimentation do we find the optimum formula for achieving business outcomes, and that formula will have to be tweaked throughout the lifetime of the application.

I was recently with a prospect and we let them know it may be several iterations before they really get their application to where it is achieving business outcomes. Incredulously they asked us, "Do you mean to say you can't write it right the first time?" to which we replied, "Unfortunately you don't know what needs to be written yet."

This is the reality of a software product. No matter how well thought out a set of up-front waterfall specifications is, it will not survive contact with the implementation intact; it will change. This will cause reimplementation work that is almost certainly not part of the estimate.
  • We go out for competitive bids
Another problem arises when we go out for competitive bids. We give our potential vendors ample incentive to come up with best-case scenarios; after all, they want to win the business. They will play with the model, the different ways things could be implemented, and so on. These changes are not necessarily being made to improve our business outcomes but to win the work.

Of course, the problem is that much of this work is time and materials. The laws of physics are not being bent by these bids; the effort in the end will be what it will be. But we give our vendors ample incentive to "get creative" in the estimating process. Which brings me to:
  • Sales people like to sell
Sales people like to sell. In all consulting companies there is a desire to win the work. In most cases they are not purposely trying to undermine their clients, but they do want the work. That leads to situations where the sales staff can put tremendous pressure on the people making the estimates to keep them low.

This problem is very pronounced in sales-driven organizations where the sales teams can be very influential. After the business is won, the delivery teams have to pick up the pieces and try to deliver on unrealistic estimates. If you are bringing in an outside consulting company, be very careful to find out whether that partner is more delivery focused or sales focused. The estimates of a delivery-focused organization are likely to be more realistic, but also likely higher.
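
To make the internationalization example above a little more concrete, here is a minimal sketch of just a few of the concerns that "add internationalization" can fan out into: per-locale strings, unit conversion, and locale-aware number formatting. Everything here (the STRINGS table, the locales, the distance_label helper) is made up for illustration; real projects would use the platform's resource system (Android strings.xml, iOS .strings files, and so on) and a proper formatting library.

    # Illustrative only: a tiny slice of what "add internationalization" involves.
    # The locales, strings, and helper below are invented for this sketch.

    STRINGS = {
        "en-US": {"distance_label": "Distance: {value:,.1f} mi"},
        "de-DE": {"distance_label": "Entfernung: {value:,.1f} km"},
    }

    MILES_TO_KM = 1.609344

    def distance_label(locale: str, miles: float) -> str:
        # Unit conversion is a separate decision from string translation.
        value = miles if locale == "en-US" else miles * MILES_TO_KM
        text = STRINGS[locale]["distance_label"].format(value=value)
        # German also swaps the decimal and grouping separators; a real
        # implementation would use a formatting library rather than this hack.
        if locale == "de-DE":
            text = text.replace(",", "_").replace(".", ",").replace("_", ".")
        return text

    print(distance_label("en-US", 26.2))   # Distance: 26.2 mi
    print(distance_label("de-DE", 26.2))   # Entfernung: 42,2 km

Even this toy version already drags in translation, unit policy, and number formatting, and it ignores currencies, per-culture graphics, and layout. That fan-out is exactly the complexity we fail to see up front.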
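
And on the people-are-not-machines point, the arithmetic is simple but worth writing down. This is a back-of-the-envelope sketch only; the team size, sprint length, and the six-focused-hours-a-day figure are assumptions for illustration, not measurements.

    # Rough sprint-capacity math. All numbers here are assumptions.
    team_size = 4
    sprint_days = 10                               # a two-week sprint
    naive_hours = team_size * sprint_days * 8      # "everyone codes 8 hours a day"
    realistic_hours = team_size * sprint_days * 6  # closer to reality

    print(f"Naive capacity:     {naive_hours} hours per sprint")
    print(f"Realistic capacity: {realistic_hours} hours per sprint")
    print(f"Hours the naive plan assumes that the team does not have: "
          f"{naive_hours - realistic_hours}")

For this made-up team that is 80 phantom hours every two weeks, which compounds quickly over a multi-month project.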


I Don't Just Want to Complain About How We Are Bad at Estimating

If you are trying to estimate a software initiative, particularly if you are trying to bring in a partner, there are things you can do to reduce the impact of all the bad estimating.

  • Understand the bids are all too low, plan accordingly
If you went out to bid based on specifications you came up with, know that you are at the wide opening of the cone of uncertainty and all the bids are likely clustered at the bottom. Whatever the successful partner's bid is, pad it. With that extra padding, the project won't end up as far over budget as it otherwise would have been.
  • Have workshops with potential partners
While the estimates will likely be clustered around the low end of the cone of uncertainty, a way to get them as accurate as possible is to move further along the cone. The more the potential partners know, the more complexity they will understand and the more accurate their estimates are likely to be.

This has the unfortunate side effect of likely making the estimates higher, but by doing these types of workshops you can usually get a feel for working with the different potential partners and for which are better than others. We run such a workshop quite frequently, called a backlog grooming workshop. Its purpose is to move further along the cone of uncertainty and understand more about what the software needs to do, while we get a feel for the prospect and they get a feel for us.
  • Examine why bids are different
If you get several bids in, don't be tempted to just take the lowest one. If one is much different from the others, try to understand why. It may not be an apples-to-apples comparison; they may be thinking about something the other potential partners are not, or the delivery model could be substantially different.

We do the same thing with planning poker. If someone's answer is very different from everyone else's, it pays to find out why. They could very well be thinking of something crucial that the other potential partners have missed, and that could change what your project will really cost.

  • Understand if you are dealing with a sales focused or delivery focused organization

The bids of sales-focused organizations may be lower. They also may have cut all kinds of corners in the delivery plan to get to that lowest bid. Just be sure you understand what you are getting; sometimes cheaper is more expensive in the long run.

In delivery-focused organizations the delivery teams tend to have more input into the estimation process and are always asking themselves whether they can actually implement it, because they will have to. The delivery plans of such organizations tend to be better thought out and, as such, may have fewer cost overruns due to problems with the delivery model itself.
  • Think in terms of product instead of project
Something we think about a lot with mobile software is whether it is a project or a product. A project has a known scope, and when it is done, it's more or less done (see this post by Russ Miller: Mobile Project or Mobile Product). Projects normally have fixed up-front budgets, timetables, and scope. Since we know the budget is likely too low, projects usually result in cost overruns.

Products are normally done on a capacity model: a team with a fixed burn rate over a period of time and a variable amount of scope. With a product approach you may not fully know how much functionality will be delivered, but with a fixed burn rate you do know the cost. If there is some flexibility in the scope of an MVP release or in the date, a product approach with a capacity team can give you much more accurate period-over-period cost forecasting and, to some extent, sidesteps the whole underestimating problem.
  • Estimate small things
A co-worker has opined that a small 1-2 story point estimate is likely to be 90% accurate, while a 40-point epic is likely to be only 50% accurate. The more you know about something and the more you can break it down into smaller chunks, the more accurate the estimate is likely to be (a quick simulation of this idea follows after this list). Similarly, if you break your software releases into smaller chunks, where an MVP really is the minimum, the estimates for them will be more accurate as well.
  • Semper Gumby
At the end of the day, be a little flexible with your expectations. We know people are poor estimators and we know some of the real reasons why. Go in knowing that the estimates are too low, but also with an understanding of how far over budget you can go before business objectives are really impacted. If you can, be somewhat flexible on scope, particularly for the first release. Complex systems take time and iterations to build correctly.

Understand that the estimates are too low, know which functionality is really critical to business success and which is not (if you think it all is, you are likely mistaken), know what tolerance you have for cost overruns, and be as flexible as you can within the constraints of meeting your business goals.
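
Here is the promised simulation for the "estimate small things" point. The error model is an assumption for illustration, not data: every item, big or small, gets the same plus-or-minus 50% relative estimation error, and the errors on small stories are independent. Even under that simplified model, twenty small estimates land much closer to the true total than one epic-sized guess, because independent errors partially cancel when you add them up. (In practice the bigger win from breaking work down is that it surfaces work you would otherwise miss entirely, which this sketch does not model.)

    import random

    random.seed(42)
    TRUE_TOTAL = 40.0     # "actual" effort in points, assumed for illustration
    STORIES = 20          # the same work broken into 20 small stories
    TRIALS = 10_000

    def noisy(value):
        # Same +/-50% relative error whether the item is big or small.
        return value * random.uniform(0.5, 1.5)

    epic_miss = 0.0
    stories_miss = 0.0
    for _ in range(TRIALS):
        epic_estimate = noisy(TRUE_TOTAL)
        story_estimate = sum(noisy(TRUE_TOTAL / STORIES) for _ in range(STORIES))
        epic_miss += abs(epic_estimate - TRUE_TOTAL)
        stories_miss += abs(story_estimate - TRUE_TOTAL)

    print(f"Average miss, one 40-point epic:      {epic_miss / TRIALS:.1f} points")
    print(f"Average miss, twenty 2-point stories: {stories_miss / TRIALS:.1f} points")

Under these assumptions the epic-level guess is off by roughly ten points on average, while the sum of the small estimates is off by only a couple.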
