Forecasting is an invaluable process for any business. A forecast can play a significant role in driving company success or failure. For example, high forecast accuracy helps a business anticipate changes in the market, identify growth opportunities, reduce risk, analyze root causes of performance and respond proactively.

On the other hand, forecasts that are poorly designed or based on weak assumptions often result in unintended consequences.

Preparing highly accurate and reliable forecasts to support decision making is one of the major challenges faced by performance management teams across sectors and industries.

Traditionally, business performance forecasters have relied on past performance to predict future performance. In a perfect, static world the formula works well. However, as we all know, the world is not static. The only thing that is constant is change.

Volatility, uncertainty, complexity and ambiguity are rising to alarming levels. Further, new technologies are transforming how we do our work now and in the future.

A number of manual processes have successfully been automated. Where businesses have previously relied on financial data alone to make strategic decisions, the dawn of the digital age has brought new meaning to non-financial data.

The new world of algorithm-powered machines

The traditional approach of forecasting is highly manual and time-consuming. People spend a significant amount of time gathering, compiling and manipulating data in spreadsheets.

Most of the time, the data used to predict the future and create forecasts is historical financial data residing in the company’s ERP systems.

Unfortunately, in today’s rapidly changing world the future doesn’t sufficiently resemble the past.

As the digital era continues to unfold, more and more data (financial, operational and external) will become available to support business forecasting.

Given that the traditional approach to forecasting relies on data in structured format, the growing volume of unstructured data means CFOs and their teams have to rethink the old-school forecasting process.

To increase the agility of the business to respond proactively, whether to competitor activities, to customer, market and industry changes that threaten set objectives, or to trends that present specific opportunities, the organization should consider all types of data at its disposal and discern what is and is not important for business performance forecasting purposes.

Artificial Intelligence, machine learning, deep learning and natural language processing are disrupting traditional business operating models, and companies are increasingly tapping into these new technologies to drive forecasting processes. These machines use statistical algorithms and modern computing capabilities to collect, store, and analyze large quantities of data and predict what is likely to happen in the future.

The algorithms are fed with warehouses of historical company and market data and taught to mimic human intelligence. Over time, through learning, forecasting accuracy improves.
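A minimal sketch of the idea: fit a simple one-lag autoregressive model to a company's historical figures and use it to project the next period. The revenue numbers here are hypothetical, and a real pipeline would use far richer data and models; this only illustrates how a statistical algorithm "learns" from the past.

```python
# Fit y[t] = a + b * y[t-1] by ordinary least squares on historical data,
# then forecast the next period. Illustrative only, not a production model.

def fit_ar1(series):
    """Estimate intercept a and slope b of a one-lag autoregression."""
    x = series[:-1]          # each period's value...
    y = series[1:]           # ...paired with the following period's value
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    cov = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    var = sum((xi - mean_x) ** 2 for xi in x)
    b = cov / var
    a = mean_y - b * mean_x
    return a, b

def forecast_next(series, a, b):
    """Apply the fitted relationship to the most recent observation."""
    return a + b * series[-1]

# Hypothetical monthly revenue history (in $k)
history = [100, 104, 109, 112, 118, 121, 127]
a, b = fit_ar1(history)
print(round(forecast_next(history, a, b), 1))
```

Because the history trends upward, the fitted model extrapolates a value above the last observation, which is exactly the "learning from the past" behaviour described above.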

In addition, NLP algorithms are able to go through a myriad of documents, including articles, social posts and other correspondence written in plain text, and extract insights that can be injected into the forecasting model.
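As a toy illustration of that idea, the sketch below scores plain-text documents against hypothetical lists of positive and negative signal words, producing a sentiment value that could be fed into a forecasting model as one extra feature. Real NLP systems are far more sophisticated; the word lists and documents here are invented for illustration.

```python
# Keyword-based sentiment scoring: (positives - negatives) / total matches.
# Word lists are hypothetical examples of forecasting-relevant signals.

POSITIVE = {"growth", "record", "strong", "expansion", "demand"}
NEGATIVE = {"recall", "shortage", "decline", "lawsuit", "weak"}

def sentiment_score(text):
    """Return a score in [-1, 1]; 0.0 when no signal words are found."""
    words = [w.strip(".,") for w in text.lower().split()]
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

docs = [
    "Strong demand and record growth in the region.",
    "Supplier shortage and weak outlook cause a decline.",
]
print([sentiment_score(d) for d in docs])  # [1.0, -1.0]
```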

Humans and machines augment each other

It is no secret that machines have a clear advantage over humans when it comes to collecting, storing and analyzing large data sets in real-time. But does this imply that decision makers should rely exclusively on machine intelligence to drive business decision making? The simple answer is no.

When it comes to applying critical thinking and judgement, human beings are much better than machines. Humans are able to evaluate and translate the machine's conclusions into decisions and actions. Take, for instance, the forecasting models used to predict the future: the best source of information for these models is the domain experts for whom the models are designed.

Domain experts have a better understanding of the models and of the assumptions the models should rest on, including the ability to uncover flaws that others may miss. Software developers, data scientists, AI experts and automation engineers, among others, rely on the expert judgement of domain experts to hard-code data features in the databases used to train predictive algorithms.

In one of my articles, Applying Design Thinking to Finance, I highlighted how companies are heavily dependent on analytical thinking in order to drive business performance.

The solution is not to embrace the randomness of intuitive thinking and abandon analytical thinking completely. The solution lies in the organization embracing both approaches: turning away from the false certainty of the past and instead peering into a mystery to ask what could be.

The fact that the past is not a reliable predictor of the future does not mean it is unimportant. History has provided us with major lessons. In the same manner, human judgement can be used to determine which historical data is suitably representative of the future to be included in forecasting decisions.

When data is abundant and the relevant aspects of the business world aren’t fast-changing, it’s appropriate to lean on statistical methods to prepare forecasts. However, even after the forecasting model has been designed and adopted, human judgement is still required to evaluate the suitability of the model’s prediction under different scenarios.
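One way to picture that division of labour is a statistical baseline paired with a judgemental scenario adjustment: the model produces the number, and a domain expert scales it for conditions the data has not yet seen (say, a known supply disruption). The figures and the adjustment factor below are illustrative assumptions, not a recommended methodology.

```python
# Statistical baseline (simple moving average) plus a human scenario factor.

def moving_average_forecast(series, window=3):
    """Average of the last `window` observations as the next-period forecast."""
    return sum(series[-window:]) / window

def adjusted_forecast(series, scenario_factor=1.0, window=3):
    """scenario_factor > 1 for expected tailwinds, < 1 for headwinds."""
    return moving_average_forecast(series, window) * scenario_factor

demand = [120, 125, 130, 128, 135]        # hypothetical unit demand
baseline = moving_average_forecast(demand)       # 131.0
pessimistic = adjusted_forecast(demand, 0.9)     # expert applies a -10% haircut
print(baseline, pessimistic)
```

The point is not the arithmetic but the workflow: the statistical method supplies the default, and human judgement decides when and how far to depart from it under each scenario.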

It is important to note that predictive models do no more than combine the pieces of information fed to them. These machines are good at identifying trends and imitating human reasoning. If bad or erroneous data, or good but biased data, is presented to the algorithms, issues can arise.

Setting aside human biases

People make decisions based on logic, emotion and instincts. One of the challenges of preparing forecasts in a complex and constantly changing world is setting aside human biases.

Subconsciously, human beings have a tendency to base judgement and forecasts on systematically biased mental heuristics rather than vigilant assessment of facts: System 1 thinking.

According to Daniel Kahneman in his book Thinking, Fast and Slow:

  • System 1 is an automatic, fast and often unconscious way of thinking. It is autonomous and efficient, requiring little energy or attention, but is prone to biases and systematic errors.
  • System 2 is an effortful, slow and controlled way of thinking. It requires energy and can’t work without attention but, once engaged, it has the ability to filter the instincts of System 1.

Personal experiences built over time make us overgeneralize facts and jump to conclusions. Instead of focusing on both existing and absent evidence, we act as if the evidence before us is the only information relevant to the decision at hand.

As a result, we downplay the risks of options to which we are emotionally inclined and overestimate our abilities and the accuracy of our judgement.

Further, a focus on the limited available evidence causes us to create coherent stories about business performance including causal relationships that are non-existent. We are quick to ignore or fail to seek evidence that runs contrary to the coherent story we have already created in our mind.

Such actions not only result in overconfident judgement but also make us overly optimistic, producing plans and forecasts that are unrealistically close to best-case scenarios.

By addressing their own cognitive biases and enabling collaboration between humans and machines, business forecasters will be empowered to create forecasts that enable faster and more confident decision making.

Machines can only assist and not displace the typically human ability to make critical judgement under uncertainty.
