
Better Forecasts = Better Business

April 5, 2017


 

“Plans are useless but planning is indispensable.”

 Eisenhower[1]

 

Are better forecasts a platform for better business plans for your enterprise, whether for the next year, the next five quarters, the next three years or possibly the next five years? In his Good Judgment Project [2] (https://www.gjopen.com), Philip Tetlock, a Wharton School professor, determined that the “accuracy of expert predictions declined toward chance five years out”, down to a baseline for performance that he referred to as “a dart-throwing chimp”.

If your enterprise needs to plan for a future beyond the forecasting horizon, plan for surprise and include actions to improve adaptability and resilience. This is, of course, easier said than done. Whilst there will be “Black Swans” [3] from time to time, the number of such “unknown-unknowns” can be reduced by rigorously reviewing the organisation’s overall competitive situation. The following short video shows a comprehensive approach to this.

 

However – a “health warning”…

“Fuzzy thinking can never be proven wrong. And only when we are proven wrong so clearly that we can no longer deny it ourselves will we adjust our mental models of the world – producing a clearer picture of reality. Forecast, measure, revise; it is the surest path to seeing better.”

Philip Tetlock [4]

 

The assumptions (e.g. investment return, the cost of a project or the time to complete a task) used in any forecast are at best expert estimates of the expected value. Whilst one cannot know with certainty what the actual value of each of these variables will be, effective use of historical data, expertise in the field or past experience can improve the estimate. A first step can be to move from forecasting a single number for each assumption to using a range. In some cases it is possible to estimate a range of values methodically. For example, in an Information Systems project or a construction project you could estimate:

  • The most likely time it will take to complete, based on expert knowledge
  • The absolute maximum time it might take – the worst case
  • The absolute maximum cost – the worst case
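As a minimal sketch of that shift from single numbers to ranges (the figures below are hypothetical, not taken from this post), each assumption can be recorded as a three-point estimate:

```python
from dataclasses import dataclass

@dataclass
class ThreePointEstimate:
    """One forecast assumption held as a range rather than a single number."""
    best: float         # optimistic case
    most_likely: float  # the expert's expected value
    worst: float        # absolute worst case

# Hypothetical figures for two assumptions in an IS project
build_time_months = ThreePointEstimate(best=7, most_likely=9, worst=15)
build_cost_thousands = ThreePointEstimate(best=250, most_likely=300, worst=450)

print(f"Build time: {build_time_months.most_likely} months "
      f"(range {build_time_months.best}-{build_time_months.worst})")
```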

By using a range of possible values instead of a single guess, you create a more realistic picture of what might happen in the future. The Delphi technique can then be used to improve your estimates of the likelihood and outcome of future events. For example:

  • A group of experts exchanges views, and each individually gives estimates and assumptions for each of the key assumptions in the forecast to a facilitator, who reviews the data and issues a summary report.
  • Each group member reviews the summary report individually and gives an updated forecast to the facilitator, who again reviews the material and issues a second report.
  • This process continues until the participants reach a consensus.
  • At each round the experts have a full record of the forecasts the other experts have made, but they do not know who made which forecast.
  • Anonymity allows the experts to express their opinions freely, encourages openness, and removes the reluctance to revise an earlier forecast that would come with publicly admitting an error.
  • The technique is an iterative process that first aims to capture a wide range of opinions from the group of experts.
  • Successive rounds aim to clarify and expand on the issues raised.
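The facilitator’s summary step lends itself to a small piece of code. The sketch below is only an illustration of the idea, assuming each expert submits one numeric estimate per round; the expert labels and figures are hypothetical:

```python
import statistics

def summarise_round(estimates: dict) -> dict:
    """Anonymised summary the facilitator circulates after a Delphi round."""
    values = list(estimates.values())
    return {
        "median": statistics.median(values),
        "low": min(values),
        "high": max(values),
        "spread": max(values) - min(values),
    }

# Round 1: each expert independently estimates time to complete (months)
round_1 = {"expert_a": 12, "expert_b": 18, "expert_c": 9, "expert_d": 15}
print(summarise_round(round_1))   # the group sees this, not who said what

# Round 2: experts revise after reading the summary; rounds continue until
# the spread is narrow enough to treat as a consensus range
round_2 = {"expert_a": 13, "expert_b": 15, "expert_c": 12, "expert_d": 14}
print(summarise_round(round_2))
```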
 

Another approach is to undertake a Monte Carlo simulation (or probability simulation). This can help you better understand the impact of the inherent risk and uncertainty in the forecast. The method is especially useful for:

  • Simulating systems with many coupled degrees of freedom.
  • Modelling where there is significant uncertainty in inputs (e.g. calculation of risk)
  • Determining the average value over the range of resultant outcomes.

The basic procedure is as follows:

  • A random value is selected for each uncertain input, based on its range of estimates
  • The model is calculated using these random values
  • The result of the model is recorded
  • The process is repeated – typically hundreds or thousands of times – using different randomly selected values
  • The results are used to describe the likelihood, or probability, of reaching various results in the model
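In code, the core sample–calculate–record–summarise loop is only a few lines. The model below is a deliberately simple stand-in (total cost as the sum of two uncertain line items with uniform ranges), and all of the figures are hypothetical:

```python
import random

def run_model() -> float:
    """One trial: draw each uncertain input from its range, then calculate the model."""
    hardware = random.uniform(80, 120)    # hypothetical cost range (thousands)
    labour = random.uniform(150, 260)     # hypothetical cost range (thousands)
    return hardware + labour

# Repeat the calculation many times with different randomly selected values
trials = [run_model() for _ in range(10_000)]

# Use the recorded results to describe the likelihood of various outcomes
average = sum(trials) / len(trials)
within_budget = sum(1 for t in trials if t <= 350) / len(trials)
print(f"Average total cost: {average:.0f}")
print(f"Chance of staying within a budget of 350: {within_budget:.0%}")
```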

For example, an Information Systems project of three phases is estimated to take 14 months: Phase 1 (Design) 4 months, Phase 2 (Build) 9 months, Phase 3 (Test) 1 month.

  • What does this tell us about risk?
  • How likely is the project to be completed on time?

To create a model that gives us insights into these questions, we could use a Monte Carlo simulation and create three estimates for each phase of the project:

  • Randomly generate values for each of the phases
  • Calculate the total time to completion
  • Run the simulation 500 times
  • To test the likelihood of a particular result, count the number of times the model returned that result (in this case, how many times the total was less than or equal to a particular number of months)
  • The original estimate for “most likely” (the expected case) was 14 months
  • From the Monte Carlo simulation we can see that the total time was 14 months or less in only 40% of the cases, but there is a 3-in-4 chance that the project will be completed in 16 months or less
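A minimal sketch of that project simulation follows. The most-likely durations (4, 9 and 1 months) come from the example above; the best-case and worst-case bounds for each phase are hypothetical, and a triangular distribution is just one simple way to sample within a three-point range, so the percentages it prints will not exactly reproduce the figures quoted above:

```python
import random

# (best case, most likely, worst case) durations in months; the bounds are illustrative
phases = {
    "Design": (3.0, 4.0, 7.0),
    "Build": (7.0, 9.0, 15.0),
    "Test": (0.5, 1.0, 3.0),
}

def simulate_project() -> float:
    """One trial: draw a duration for each phase and sum them."""
    return sum(random.triangular(low, high, mode)
               for (low, mode, high) in phases.values())

runs = [simulate_project() for _ in range(500)]   # 500 simulated projects

for months in (14, 16):
    share = sum(1 for total in runs if total <= months) / len(runs)
    print(f"Chance of finishing within {months} months: {share:.0%}")
```

The particular percentages depend entirely on the ranges chosen; the point is that the output is a distribution of possible completion times rather than a single date.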

 

 

What management actions may we put in place given this insight?

Like any forecasting model, the simulation will only be as good as the estimates you make. Remember that the simulation only represents probabilities and not certainty.

 

“Probability judgements should be explicit so we can consider whether they are as accurate as they can be. And if they are nothing but a guess, because that’s the best we can do, we should say so.”

Philip Tetlock [4]

Some years ago, Six Sigma was being used widely in an industry my organisation was serving. Pipeline accuracy was important for our business because it was the key to managing utilisation and making better pricing decisions (if future chargeability looked weak, then allocating more resources to business development would be prudent, as would ensuring staff training was up to date and perhaps allowing more pricing flexibility). Following a detailed process review we concluded that 30-day and 90-day pipeline accuracy merited a focus, and we introduced a Critical to Process metric to track it. “Goodness” was defined as accuracy, with no incentive to enter low forecasts: the metric treated over-achievement and under-achievement against the forecast equally.

Each month the forecasts from the previous quarter were evaluated and the pipeline entries that caused inaccuracies were investigated. Some individuals (Partners) were consistently optimistic about the time needed to close new deals. After being shown the data, those individuals discussed win probabilities and forecast start dates more broadly within their teams, and this appeared to improve the accuracy of the forecasts.

I recall discussing with one individual why he always forecast a 60% win probability 60 days out. He told me that, in his experience, keeping the win probability at 60% and the close date 60 days out meant his budget was not challenged and top management focused their attention on other people: those with higher win probabilities and start dates in the next 30 days! Behaviour that was not consistent with optimising our future chargeability.

 

Would a virtuous circle of improved, probability-linked forecasts, informed by operational and strategic insights and by feedback from previous forecasts, improve your business? If you think it would, please contact me. I will be delighted to talk to you.

[1] Oxford Essential Quotations, New York, Oxford University Press, 2014

[2] This project collected 27,500 forecasts on politics, geopolitics and economics from approximately 300 experts and analysed their accuracy over 18 years.

[3] Taleb, N.N., “The Black Swan: The Impact of the Highly Improbable”, 2007.

[4] Tetlock, P., Gardner, D., “Superforecasting – The Art and Science of Prediction”, Random House Books, 2015.

 

 

 

 
