Decision Making

Analytics and AI: Humans and Machines are Good at Different Aspects of Prediction

Driven largely by growth in the IoT and the widespread use of the internet, social media and mobile devices to search, text, email and capture images and videos, the amount of data we produce daily is startling.

Consequently, companies are turning to data analytics and AI technologies to help them make sense of all the data at their disposal, predict the future, and make informed decisions that drive enterprise performance.

Although analytics and AI systems are increasingly being adopted in mission-critical business processes, the implications of these emerging technologies for business strategy, management, talent and decisions are still poorly understood.

For example, the single most common question in the AI debate is: “Will adoption of AI by businesses lead to massive human job cuts?”

If past technological advances are any guide, yes, certain jobs will be lost, and new ones will be created. However, machines are not taking over the world, nor are they eliminating the need for humans in the workplace.

Jobs will still be there, albeit different from the traditional roles many are accustomed to. The majority of these new roles will require a new range of education, training, and experience.

For instance, nonroutine cognitive tasks demanding high levels of flexibility, creativity, critical thinking, problem-solving, leadership, and emotional intelligence do not yet lend themselves to wholesale automation.

Analytics and AI rely on data to make predictions

The more and better data machine learning algorithms are continually fed, the more they learn and the better they become at making predictions.

Given these applications search for patterns in data, any inaccuracies or biases in the training data will be reflected in subsequent analyses.

But how much data do you need? The variety, quality and quantity of input, training and feedback data required depends on how accurate the prediction or business outcome must be to be useful.

Training data is used to train the predictive algorithms to predict the target variable, while the feedback data is used to assess and improve the algorithm’s prediction performance.
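The training/feedback split described above can be sketched in a few lines of plain Python. The synthetic data, the one-variable linear model and the 80/20 split below are hypothetical illustrations, not a prescription:

```python
# Minimal sketch of a training/feedback data split, standard library only.
# The synthetic data and the simple linear model are hypothetical examples.
import random
import statistics

random.seed(42)

# Hypothetical history: target = 3*x + 5 plus noise.
data = [(x, 3 * x + 5 + random.gauss(0, 2)) for x in range(100)]
random.shuffle(data)

# Training data teaches the algorithm; feedback data assesses it.
train, feedback = data[:80], data[80:]

def fit(points):
    """Ordinary least squares for a single predictor."""
    xs = [x for x, _ in points]
    mx = statistics.mean(xs)
    my = statistics.mean(y for _, y in points)
    slope = (sum((x - mx) * (y - my) for x, y in points)
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

slope, intercept = fit(train)

# Prediction performance measured on data the model has never seen.
mae = statistics.mean(abs(y - (slope * x + intercept)) for x, y in feedback)
print(f"slope={slope:.2f} intercept={intercept:.2f} feedback MAE={mae:.2f}")
```

Holding the feedback set out of training is what makes its error an honest estimate of how the model will perform on new data.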

Undoubtedly, advanced analytics and AI systems are only as good as the data they are trained on. The data used to train these learning algorithms must be free of any noise or hidden biases.

You therefore need to understand how predictive technologies learn from data to perform sophisticated tasks such as customer lifetime value modeling and profitability forecasting.

This helps guide important decisions around the scale, scope and frequency of data acquisition. It’s about striking a balance between the benefits of more data and the cost of acquiring it.
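One way to reason about that balance: for something as simple as an average, the standard error shrinks with the square root of the sample size, so each extra batch of data buys less accuracy than the last. A toy illustration, with an assumed spread of 10 for the quantity being estimated:

```python
# Toy illustration of diminishing returns from more data: the standard
# error of a sample mean falls with the square root of the sample size,
# so quadrupling the data only halves the error. Numbers are hypothetical.
import math

sigma = 10.0  # assumed spread of the quantity being estimated
for n in (100, 400, 1600, 6400):
    se = sigma / math.sqrt(n)
    print(f"n={n:>5}  standard error = {se:.3f}")
```

Sixty-four times the data for an eight-fold accuracy gain is the kind of arithmetic that should inform how much data acquisition is worth paying for.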

Humans and machines both have shortcomings

In the context of prediction, humans and machines both have recognizable strengths and weaknesses.

Unless we identify and differentiate which tasks humans and machines are best suited for, all analytics and AI investments will come to naught.

For instance, faced with complex information with intricate interactions between different indicators, humans perform worse than machines. Heuristics and biases often get in the way of making accurate predictions.

Instead of accounting for statistical properties and data-driven predictions, more emphasis is often placed on salient information unavailable to prediction systems.

And most of the time, that information is misleading, hence the poor performance.

Although machines are better than humans at analyzing huge data sets with complex interactions among disparate variables, it is crucial to recognize the situations in which machines are poor at predicting the future.

The key to unlocking valuable insights from predictive analytics investments is, first and foremost, understanding the specific business question that the data needs to answer.

This dictates your analysis plan and the data collection approaches you choose. Get the business question wrong, and you can expect the insights and recommendations from the analysis to be wrong as well.

Recall, with plentiful data, machine predictions can work well.

But, in situations where there is limited data to inform future decision making, machine predictions are relatively poor.

To quote Donald Rumsfeld, former US Secretary of Defense:

There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.


Thus, for known knowns, abundant data is readily available. Accordingly, humans trust machines to do a better job than them. Even so, the level of trust changes the moment we start talking about known unknowns and unknown unknowns.

With these situations, machine predictions are relatively poor because we do not have a lot of data to ingest into the prediction model.

Think of infrequent events (known unknowns) that occur once in a while, or something that has never happened before (unknown unknowns).

For infrequent events, at least, humans are occasionally better at predicting with little data.

This is largely because we are good at comparison and prudent judgement: examining a new situation and identifying other, comparable settings whose lessons are useful in the new one.

We are naturally wired to remember key pieces of information from the little data available or the limited associations we have had in the past.

Rather than being precise, our predictions come with a confidence range that acknowledges their uncertainty.
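That confidence range can be made explicit. A sketch using only the standard library and a handful of hypothetical observations; the plus-or-minus two standard errors rule is a rough normal approximation, not a rigorous small-sample interval:

```python
# Sketch of "prediction with a confidence range": with only a handful of
# observations (hypothetical numbers), an honest estimate is an interval,
# not a point. Uses a rough normal approximation for simplicity.
import statistics

observations = [12.0, 15.0, 9.0, 14.0, 11.0]  # the little data we have

mean = statistics.mean(observations)
se = statistics.stdev(observations) / len(observations) ** 0.5

# Rough 95% range: mean plus or minus 2 standard errors.
low, high = mean - 2 * se, mean + 2 * se
print(f"estimate: {mean:.1f}, range: [{low:.1f}, {high:.1f}]")
```

The fewer the observations, the wider the range, which is exactly the honesty a point forecast hides.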

Faced with unknown unknowns, both humans and machines are relatively bad at predicting their arrival.

The simple truth is that we cannot predict truly new events from past data. Look no further than the current Brexit conundrum.

Nobody knew precisely what the unintended consequences of the UK leaving the EU would be. Leavers and Remainers both speculated as to what the benefits and disadvantages of leaving the EU may be.

Of course, nobody knows what will happen in the future but that doesn’t mean we can’t be prepared, even for the unknown unknowns.

In their book Prediction Machines: The Simple Economics of Artificial Intelligence, Ajay Agrawal, Joshua Gans, and Avi Goldfarb present an additional category of scenarios under which machines also fail to predict precisely – Unknown Knowns.

Per the trio:

Unknown knowns is when an association that appears to be strong in the past is the result of some unknown or unobserved factor that changes over time and makes predictions we thought we could make unreliable.


With unknown knowns, predictive tools appear to provide a very accurate answer, but that answer can be very incorrect, especially if the algorithms have little grasp of the decision process that created the data.

To support their point of view, the authors make reference to pricing and revenue analysis in the hotel industry, although the same viewpoint is applicable elsewhere.

In many industries, higher prices are associated with higher sales, and vice versa.

For example, in the airline industry, airfares are low outside the peak season, and high during peak seasons (summer and festive) when travel demand is highest.

Presented with this data, and without an understanding that price movements are often a function of demand and supply, a simple prediction model might recommend raising airfares on various routes to sell more empty seats and increase revenues: evidence of a causal inference problem.

But a human being with a solid understanding of economics will immediately point out that increasing airfares is unlikely to increase ticket sales.

To the machine, this is an unknown known. But to a human with knowledge of pricing and profitability analysis, this is a known unknown or maybe even a known known provided the human is able to properly model the pricing decision.

Thus, to address such shortcomings, humans should work with machines to identify the right data and appropriate data analysis models that take into consideration seasonality and other demand and supply factors to better predict revenues at different prices.

As data analytics and AI systems become more advanced and spread across industries, and up and down the value chain, companies that will progress further are those that are continually thinking of creative ways for machines to integrate and amplify human capabilities.

In contrast, those companies that are using technology simply to cut costs and displace humans will eventually stop making progress, and cease to exist.

Plan Continuation Bias and Decision Making

Businesses of all shapes and sizes operate in systems whose different parts interact with one another to produce outcomes, anticipated or not.

Some systems are linear with easily predictable outcomes. Other systems are more complex, more like a spider web, with many of their parts intricately linked and easily affecting one another.

Understanding the pertinent system dynamics is therefore critical for making better, informed decisions.

Unfortunately, the business environment in which key performance decisions are regularly made is not linear, nor are its outcomes easily predicted.

Businesses inhabit and operate in environments that consist of interdependent networks of relationships, which connect and interact with each other to produce outcomes.

In their book Meltdown: Why Our Systems Fail and What We Can Do About It, Chris Clearfield and András Tilcsik observe that even seemingly unrelated parts of a system are connected indirectly, and some subsystems are linked to many parts of the system. In the unfortunate event of something going wrong, problems show up everywhere, making it difficult to understand what is happening.

So what do complex systems have to do with plan continuation bias?

We can plan for the future, but we don’t have a crystal ball to predict accurately what will happen next week, month, quarter or year. Based on past experiences, and analysis of data, we might have an idea but that is all there is.

In spite of the past not always being the best predictor of future performance, it’s surprising to see the level of system blindness that decision makers still exhibit.

Let’s look at a decision to enter a new international market as an example. The strategy and factors that contributed to success in one market by no means guarantee success in a different market.

Instead of following a step-by-step approach to understand the system dynamics of the new market, leaders most of the time adopt a copy-and-paste approach, resulting in widespread failure.

Even when there are tell-tale signs of the expansionary move going sideways, leaders push ahead and implement the bad strategy, an indication that plan continuation bias is probably a contributing factor.

Simply put, plan continuation bias is the tendency we human beings all have to continue on a path we have already chosen or fallen into, or to pursue a decision we have made, without rigorously checking whether it is still the best decision.

In business decisions, this form of bias is prevalent in strategy implementation, project management, as well as forecasting and planning. We often don’t take the time to review the plan against actual results and change course. We persist even when the original plan no longer makes sense.

It could be that the system in which we based our original plan assumptions has significantly changed, ultimately requiring us to take a step back and reflect to better understand what’s going on and decide how to proceed.

Because we are so fixated on the end goal, which seems achievable, we blindly convince ourselves to push ahead even when current results are telling us otherwise. We continue to pump resources into the plan or project, eventually resulting in waste and worse results than before.

Sometimes plan continuation bias is a result of the organizational culture. If the culture is one that doesn’t tolerate “bad news” and suppresses speaking up when circumstances change, then chances are high that everyone will press on towards the goal regardless of the warning signs.

Leaders should therefore encourage speaking up, so that employees are not afraid of being reprimanded for doing so.

Rather than reprimand an employee who has identified flaws in the existing plan or discovered an impending project catastrophe, why not publicly praise them? Such a move sends a message that leaders are open to receiving feedback.

Considering the complexity of today’s business environment and its tight linkages, it might not be feasible to pause each time a key decision is made. You want to avoid a situation where decisions made are more reactive than proactive.

Instead, find a balance between focusing on the tasks or initiatives to be performed and making sense of what is happening. Avoid getting fixated on one or the other.

Making sense of what is happening gives you a chance to notice unexpected threats and figure out what to do about them before things get completely out of hand.

Avoid making key decisions under time pressure and consider all plan possibilities instead of settling just on one.

Human or Machine Intelligence? Augmentation Key to Better Forecasting

Forecasting is an invaluable process for any business. A forecast can play a significant role in driving company success or failure. For example, high forecast accuracy helps a business anticipate changes in the market, identify growth opportunities, reduce risks, analyze root causes of performance and proactively respond.

On the other hand, forecasts that are poorly designed or based on weak assumptions often result in unintended consequences.

Preparing highly accurate and reliable forecasts to support decision making is one of the major challenges faced by performance management teams across sectors and industries.

Traditionally, business performance forecasters have relied on past performance to predict future performance. In a perfect, static world the formula works well. However, as we all know, the world is not static. The only thing that is constant is change.

Volatility, uncertainty, complexity and ambiguity are at an increasingly alarming level. Further, new technologies are transforming how we do our work now and in the future.

A number of manual processes have successfully been automated. Where businesses have previously relied on financial data alone to make strategic decisions, the dawn of the digital age has brought new meaning to non-financial data.

The new world of algorithm-powered machines

The traditional approach of forecasting is highly manual and time-consuming. People spend a significant amount of time gathering, compiling and manipulating data in spreadsheets.

Most of the time, the data used to predict the future and create forecasts is historical financial data residing in the company’s ERP systems.

Unfortunately, in today’s rapidly changing world the future doesn’t sufficiently resemble the past.

As the new digital era continues to unfold, more and more data (financial, operational and external) will become available to support business forecasting.

Given that the traditional approach to forecasting leverages data in structured format, with more and more unstructured data available, CFOs and their teams have to rethink the old-school forecasting process.

To respond proactively to competitor activities; to customer, market and industry changes that threaten the achievement of set objectives; or to trends that present specific opportunities, the organization should consider all types of data at its disposal and discern what is, and is not, important for business performance forecasting.

Artificial Intelligence, machine learning, deep learning and natural language processing are disrupting traditional business operating models and companies are increasingly tapping into these new technologies to drive forecasting processes. These highly powered machines use statistical algorithms and modern computing capabilities to collect, store, and analyze large quantities of data and predict what is likely to happen in the future.

The algorithms are fed warehouses of historical company and market data and taught to mimic human intelligence. Over time, through learning, forecasting accuracy improves.

In addition, NLP algorithms are able to go through a myriad of documents including articles, social posts and other correspondences written in plain text and extract insights that can be injected into the forecasting model.

Humans and machines augment each other

It is no secret that machines have a superior advantage over humans when it comes to collecting, storing and analyzing large data sets in real-time. But does this imply that decision makers should rely exclusively on machine intelligence to drive business decision making? The simple answer is no.

When it comes to applying critical thinking and judgement, human beings are much better than machines. Humans are able to evaluate and translate a machine’s conclusions into decisions and actions. Take, for instance, the forecasting models used to predict the future: the best source of information for these models is the domain experts for whom they are designed.

Domain experts have a better understanding of the models and of the assumptions to base them on, including the ability to uncover flaws that others may miss. Software developers, data scientists, AI experts and automation engineers, among others, rely on the expert judgement of domain specialists to hard-code data features in databases that are used to train predictive algorithms.

In one of my articles, Applying Design Thinking to Finance, I highlighted how companies are heavily dependent on analytical thinking in order to drive business performance.

The solution is not to embrace the randomness of intuitive thinking and avoid analytical thinking completely. The solution lies in the organization embracing both approaches: turning away from the false certainty of the past and instead peering into a mystery to ask what could be.

The fact that the past is not a reliable predictor of the future does not necessarily mean that it is not important. History has been known to provide major lessons to us. In the same manner, human judgement can be used to determine which historical data is suitably representative of the future to be included in forecasting decisions.

When data is abundant and the relevant aspects of the business world aren’t fast-changing, it’s appropriate to lean on statistical methods to prepare forecasts. However, even after the forecasting model has been designed and adopted, human judgement is still required to evaluate the suitability of the model’s prediction under different scenarios.
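As a sketch of what "leaning on statistical methods" can mean in practice, here is simple exponential smoothing over a hypothetical monthly revenue series; the human judgement sits in choosing the smoothing factor and in deciding whether the next period still resembles this history:

```python
# Minimal sketch of a statistical forecast: simple exponential smoothing
# over a hypothetical monthly revenue series. Alpha and the figures are
# illustrative assumptions, not recommendations.

def exponential_smoothing(series, alpha=0.3):
    """Return the one-step-ahead forecast after smoothing the series."""
    level = series[0]
    for value in series[1:]:
        # Each new observation nudges the level by a fraction alpha.
        level = alpha * value + (1 - alpha) * level
    return level

revenue = [100, 104, 99, 107, 111, 108, 115, 118]  # hypothetical months
forecast = exponential_smoothing(revenue)
print(f"next-period forecast: {forecast:.1f}")
```

A higher alpha tracks recent changes more aggressively, a lower one smooths them out; that trade-off is exactly the kind of scenario judgement the model cannot make for itself.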

Important to note is that predictive models do no more than combine the pieces of information fed to them. These machines are good at identifying trends and imitating human reasoning. If bad or erroneous data, or good but biased data is presented to the algorithms, issues can arise.

Setting aside human biases

People make decisions based on logic, emotion and instincts. One of the challenges of preparing forecasts in a complex and constantly changing world is setting aside human biases.

Subconsciously, human beings have a tendency to base judgements and forecasts on systematically biased mental heuristics rather than on a vigilant assessment of the facts: System 1 thinking.

According to Daniel Kahneman in his book Thinking, Fast and Slow:

  • System 1 is an automatic, fast and often unconscious way of thinking. It is autonomous and efficient, requiring little energy or attention, but is prone to biases and systematic errors.
  • System 2 is an effortful, slow and controlled way of thinking. It requires energy and can’t work without attention but, once engaged, it has the ability to filter the instincts of System 1.

Personal experiences built over time make us overgeneralize facts and jump to conclusions. Instead of considering both existing and absent evidence, we act as if the evidence before us is the only information relevant to the decision at hand.

As a result, the risks of options to which we are emotionally inclined are downplayed, and our abilities and the accuracy of our judgement are also overestimated.

Further, a focus on the limited available evidence causes us to create coherent stories about business performance including causal relationships that are non-existent. We are quick to ignore or fail to seek evidence that runs contrary to the coherent story we have already created in our mind.

Such actions not only result in overconfident judgement but also cause us to be overly optimistic and to create plans and forecasts that are unrealistically close to best-case scenarios.

By addressing our own cognitive biases and enabling collaboration between humans and machines, business forecasters will be empowered to create forecasts that enable faster and more confident decision making.

Machines can only assist, not displace, the distinctly human ability to make critical judgements under uncertainty.

© 2019 ERPM Insights
