Tuesday, December 18, 2007

Accurate Estimation

Accurate estimation should be a required college course for everyone. Estimates are a daily part of life at any corporation. At the highest levels, managing a company is an exercise in anticipating the best avenues for growth and applying the appropriate resources to them. Doing that properly requires estimates. The more precise the estimates are, the better the decisions will be.

So how do you properly estimate? The first thing to do is recognize that estimates will always have an error margin. The challenge is not just to get a first-order estimate, but to understand what drives the error.

In my experience, there are three primary ways to derive a first-order estimate. The first is to do a bottom-up analysis of all available data and make predictions on each piece. The second is to build a model based on key variables that you can measure and for which you have prior data. The third is to use gut instinct.
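
As a rough Python sketch of the first two approaches (every task name, figure, and rate here is hypothetical), estimating a project's effort in person-days might look like this:

    # Hypothetical sketch of the first two approaches, estimating a
    # project's effort in person-days. Task names, figures, and rates
    # are all made up for illustration.

    # 1. Bottom-up: predict each known piece of work and sum the pieces.
    task_estimates = {"design": 10, "implementation": 35, "testing": 15, "deployment": 5}
    bottom_up = sum(task_estimates.values())

    # 2. Model-based: use measurable variables plus prior data,
    #    e.g. historical throughput in person-days per feature.
    feature_count = 12       # measurable variable for this project
    days_per_feature = 5.5   # derived from prior projects
    model_based = feature_count * days_per_feature

    print(f"Bottom-up: {bottom_up} person-days, model: {model_based:.0f} person-days")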

Very often the first two avenues prove either so labor-intensive or so imprecise that gut instinct is applied instead.

Another form of “gut instinct” that has proven to have some value is the prediction market, where the average of a large sample of individual votes is used as the estimate. This has the advantage of potentially illuminating bimodal distributions, which suggest there is a key binary driver in the prediction model.
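
A minimal sketch of that aggregation, with made-up votes and a crude gap-based check rather than a formal statistical test, might look like this:

    import statistics

    # Hypothetical individual estimates (say, forecast revenue in $M)
    # collected from a prediction market or an internal poll.
    votes = [4.8, 5.1, 5.0, 4.9, 5.2, 9.8, 10.1, 9.9, 5.0, 10.2]

    consensus = statistics.mean(votes)
    print(f"Consensus estimate: {consensus:.2f}")

    # Crude bimodality heuristic: if the largest gap between adjacent
    # sorted votes dwarfs the typical gap, the votes form two clusters.
    ordered = sorted(votes)
    gaps = [b - a for a, b in zip(ordered, ordered[1:])]
    typical_gap = statistics.median(gaps)
    if typical_gap and max(gaps) > 5 * typical_gap:
        print("Votes look bimodal -- a key binary driver may be in play.")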

In my experience, the most effective estimates are derived by combining all three. One person builds a model or runs data through an existing prediction model. Another person builds a bottom-up analysis of the prediction. A third person forms their own gut instinct and does a simple validation of it with other experienced individuals.

The rationalization process is the key to success in this approach. If one of the three methods is out of whack with the others (or worse, none are even close), the next step is to rationalize the estimates. If the model is predicting lower, why did the bottom-up analysis predict higher, or why did gut instinct predict higher? If you can resolve the discrepancies, you often illuminate the key drivers that are unique to the particular value you’re predicting. That learning can then be built back into the model and into the bottom-up analysis for future estimates.
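
A toy sketch of that comparison step, with arbitrary numbers and an arbitrary 15% tolerance, might look like this:

    # Hypothetical sketch: compare the three independently derived
    # estimates and flag any that need to be rationalized. All numbers
    # and the 15% tolerance are arbitrary choices for illustration.
    estimates = {
        "model": 120,         # output of a prediction model
        "bottom_up": 180,     # sum of per-item predictions
        "gut_instinct": 135,  # experienced individual's number
    }

    center = sorted(estimates.values())[1]  # median of the three
    tolerance = 0.15

    for source, value in estimates.items():
        deviation = abs(value - center) / center
        if deviation > tolerance:
            print(f"{source} ({value}) is {deviation:.0%} off the median "
                  f"({center}); dig into its assumptions.")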

Now the challenge is to model the potential errors. Potential errors are stated as risks and assumptions. All predictions make certain assumptions and all have certain risks. If you can quantify the potential impact of a missed assumption or a risk and assign a likelihood value, you can build a confidence distribution with a Monte Carlo simulation. This is especially helpful if the models, analyses, or gut instinct are biased toward sunny-day scenarios.
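
A minimal Monte Carlo sketch along those lines, with entirely made-up risks, impacts, and likelihoods, might look like this:

    import random

    # Hypothetical risks/assumptions: (name, probability it bites, impact in days).
    # All names and numbers are illustrative, not from any real project.
    base_estimate = 60  # days, the sunny-day estimate
    risks = [
        ("key dependency slips",            0.30, 15),
        ("new requirement discovered",      0.50, 10),
        ("integration harder than planned", 0.20, 20),
    ]

    def simulate_once():
        total = base_estimate
        for _name, probability, impact in risks:
            if random.random() < probability:
                total += impact
        return total

    trials = sorted(simulate_once() for _ in range(10_000))

    # Read confidence levels off the simulated distribution.
    p50 = trials[len(trials) // 2]
    p90 = trials[int(len(trials) * 0.9)]
    print(f"50% confidence: {p50} days, 90% confidence: {p90} days")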

The final, and often overlooked, step in the estimation process is to track your predictions against your actual results. Measuring a large number of predictions and results allows you to model your known error rate and to ascertain whether your processes are really improving.
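
One simple way to do that tracking, sketched here with hypothetical estimate/actual pairs, is to measure the ratio of actual to estimate over time:

    import statistics

    # Hypothetical history of (estimate, actual) pairs, e.g. in days.
    history = [(10, 13), (20, 22), (5, 9), (40, 45), (15, 14), (30, 41)]

    # Ratio of actual to estimate; values above 1.0 mean we ran over.
    ratios = [actual / estimate for estimate, actual in history]
    bias = statistics.mean(ratios)     # systematic over- or under-run
    spread = statistics.stdev(ratios)  # how consistent the misses are

    print(f"Actuals average {bias:.2f}x the estimate (spread {spread:.2f}).")
    # A new raw estimate can be calibrated with the measured bias:
    print(f"A raw 25-day estimate calibrates to roughly {25 * bias:.0f} days.")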

If you can improve your estimates, you will execute more cleanly, deliver more consistently, operate more efficiently, and invest more wisely.
