Predictive technologies are transforming how we live and work, upending old ways of decision making. Every day, algorithms influence which products we buy, what content we consume, which movies we watch, which investments we make, and even whom we date and marry.
In business, they have increasingly become an integral part of supply chain and logistics, marketing, pricing, hiring and recruitment, and risk decision making, ultimately creating opportunities for new business models, product offerings, and strategies to compete.
Predicting the future is, understandably, at the core of making decisions under uncertainty. But in a continuously changing, uncertain, and complex world, to what extent should we trust algorithms and their predictions? When and how do we gain the necessary comfort to let algorithms move from a decision-support role to that of critical autonomous decision-makers?
In late 2021, Zillow, the online real-estate marketplace company, permanently shut down its iBuying business because its machine learning algorithm couldn’t accurately predict home prices, causing the company to suffer huge financial losses.
Wired magazine published an article titled “Why Zillow Couldn’t Make Algorithmic House Pricing Work.”
The premise of Zillow’s iBuying business was the ability to forecast home prices accurately three to six months in advance using Zestimate, the company’s pricing algorithm. That window reflected how long Zillow expected it would take to fix up and resell the homes it bought.
Before being put to work, Zestimate was trained on millions of home valuations across the US and analyzed dozens of variables, including the home’s age, size, condition, location, and ZIP code. For a while, the algorithm appeared to get it right, until COVID-19 arrived.
Due to government-mandated lockdowns and restrictions, housing market activity came to a screeching halt in early 2020. Then in late 2021, after governments eased restrictions, activity rebounded.
As part of its goal to buy 5,000 homes a month by 2024 and increase market share, Zillow went on a buying spree, this time looking beyond its initial business model, which had focused on cookie-cutter homes.
The company also added lower-quality and more complex homes to the mix, which some experts believe stretched beyond the bounds of the algorithm’s ability; they point out that Zillow ended up buying many of these homes at prices above the market average.
Not only did the COVID-19 pandemic cripple global economies, but it also led to a shortage of contractors in some of the markets where the company had bought homes. As a result, Zillow could not flip its homes as quickly as it had hoped and was left with an oversupply of overpriced homes.
Ultimately, the company sold homes at lower prices than the algorithm had predicted.
There are plenty of lessons to be learned from Zillow’s iBuying pricing algorithm:
- Even if the right technology, data, people, and processes are in place, the future is not easily predictable. I am inclined to believe Zillow had all these capabilities in place, but the gyrating markets of 2021, among other factors, made it difficult to accurately predict home prices.
- A prediction is not a decision. Rather, effective decision-making involves applying judgment to a prediction and then taking the appropriate course of action. Zillow failed to assess the big picture and consider other factors affecting home valuations that even very advanced algorithms cannot comprehend.
- While an algorithm predicts what is likely to happen, humans are still integral to deciding the course of action to take, based on their understanding of the objectives to be achieved and the key determinants of business success. The first sketch following this list illustrates one way such judgment might be layered on top of a prediction.
- It’s critical to develop a better, deeper, more nuanced understanding of how well an algorithm works in both typical and outlier cases, the types of assumptions and inferences it is making, and the situations in which those assumptions might fail. Zestimate performed well for cookie-cutter homes but failed when complexity was thrown into the mix.
- The same algorithm can have significantly different results or outcomes depending on the context in which it is applied. It’s therefore imperative to evaluate how the algorithm’s predictions or recommendations change with minor modifications to the input, training, or feedback data.
- The same input data can produce different results depending on the algorithm acting on it. Would a different algorithm, given the same input data as Zestimate, have calculated similar valuations and predicted similar selling prices? The second sketch following this list illustrates this kind of comparison, along with the input-perturbation check from the previous point.
- Just as no two humans are alike, no two algorithms are the same. Accounting for these differences is critical to understanding what drives trust in algorithms and their predictions. In addition to expected results, simple design choices can also cause unintended and complex outcomes.
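To make the prediction-versus-decision distinction concrete, here is a minimal, purely hypothetical sketch in Python of a human-defined decision rule layered on top of a price prediction. The thresholds, cost figures, and function names are invented for illustration; this is not Zillow’s actual buying logic.

```python
# Hypothetical sketch: a judgment layer on top of a model's price prediction.
# Every number, name, and rule below is invented for illustration only.

def decide_to_buy(predicted_resale_price: float,
                  asking_price: float,
                  renovation_estimate: float,
                  local_inventory: int) -> bool:
    """Apply business judgment to a prediction before acting on it."""
    carrying_costs = 0.03 * asking_price                 # assumed taxes, fees, financing
    uncertainty_buffer = 0.10 * predicted_resale_price   # margin of safety on the forecast

    expected_margin = (predicted_resale_price
                       - asking_price
                       - renovation_estimate
                       - carrying_costs
                       - uncertainty_buffer)

    # Judgment beyond the model: avoid piling into an oversupplied local market.
    if local_inventory > 500:
        return False

    return expected_margin > 0


# The model predicts a profitable resale, yet the decision rule still says no.
print(decide_to_buy(predicted_resale_price=430_000,
                    asking_price=400_000,
                    renovation_estimate=25_000,
                    local_inventory=650))  # False
```

The point is not the specific numbers but the structure: the prediction is only one input to the decision, and the uncertainty buffer and inventory check encode judgment the algorithm itself does not have.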
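A second sketch, again purely illustrative, shows how one might probe two of the questions above: how predictions shift when an input is nudged slightly, and how two different algorithms trained on the same data can disagree. It uses synthetic data and off-the-shelf scikit-learn models, not Zillow’s data or Zestimate.

```python
# Illustrative only: synthetic housing-like data and generic models,
# not Zillow's features, data, or algorithm.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical home features: age (years), size (sq ft), condition (1-5).
age = rng.uniform(0, 80, n)
size = rng.uniform(600, 4_000, n)
condition = rng.integers(1, 6, n)
X = np.column_stack([age, size, condition])

# Synthetic "true" prices with noise.
price = 50_000 + 150 * size - 800 * age + 20_000 * condition + rng.normal(0, 25_000, n)

X_train, X_test, y_train, y_test = train_test_split(X, price, random_state=0)

linear = LinearRegression().fit(X_train, y_train)
boosted = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# 1) Sensitivity: nudge one input (size up 2%) and see how predictions move.
X_nudged = X_test.copy()
X_nudged[:, 1] *= 1.02
shift = boosted.predict(X_nudged) - boosted.predict(X_test)
print(f"Mean prediction shift from a 2% size change: ${shift.mean():,.0f}")

# 2) Same data, different algorithms: how far apart are the two models?
gap = np.abs(linear.predict(X_test) - boosted.predict(X_test))
print(f"Mean disagreement between the two models: ${gap.mean():,.0f}")
print(f"Largest disagreement on a single home:    ${gap.max():,.0f}")
```

Even on clean synthetic data, the two models will not agree on every home, and a small change to a single input moves every prediction; on messy real-world data, with outliers and shifting markets, both effects are amplified.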
Data-driven predictions can succeed, and they can fail. So, in an uncertain world, algorithmic thinking alone is not sufficient to make good decisions.
Not everything is known or predictable. Good rules of thumb and intuition are also needed.