Zillow’s Great Data Science Disaster

The American company Zillow Group Inc., or simply Zillow, began as a media company, making money by selling ads on its websites, before turning into an online real estate company.

Founded in 2006 by former Microsoft executives and Expedia founders Rich Barton and Lloyd Frink, Hotwire.com co-founder Spencer Rascoff, and David Beitel and Kristin Acker, Zillow started purchasing houses in 2018, claiming to leverage data to make house flipping profitable at scale. In 2019, the company generated $2.7 billion in revenue.

However, automating everything does not always make sense. In November 2021, CEO Barton announced that Zillow would stop purchasing homes, at a time when it already owned 7,000 houses. Additionally, the real estate company decided to sell off its entire inventory and lay off about 25 per cent of its 8,000 employees. The decision followed a $420 million loss by its house acquiring and selling arm, Zillow Offers, in 2021's third quarter.

Today, we dive deep to explore what exactly went wrong with Zillow’s tech.

History says…

Flipping houses involves buying a property at a lower value, spending on improvements and renovations and then selling it at a higher price. What’s tricky here is analyzing and predicting the potential price of the house or property. Zillow wanted to eliminate the whole bidding and closing process when it came to buying and selling houses.

When it first started operations, Zillow built Zestimate, a tool that drew on multiple data sources to estimate the value of properties. By 2006, Zillow had a database of approximately 43 million homes, which allowed it to predict housing prices with a median absolute percentage error of 14 per cent. Over time, Zillow acquired data on about 110 million homes, bringing the error rate down to five per cent.
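To make the metric concrete, here is a minimal sketch of how a median absolute percentage error could be computed for a valuation model. The prices below are made-up illustrative numbers, not Zillow data, and the function name is our own.

```python
def median_absolute_percent_error(predicted, actual):
    """Median of |predicted - actual| / actual, expressed as a percentage."""
    errors = sorted(abs(p - a) / a * 100 for p, a in zip(predicted, actual))
    mid = len(errors) // 2
    if len(errors) % 2:                        # odd count: middle element
        return errors[mid]
    return (errors[mid - 1] + errors[mid]) / 2  # even count: average of middle two

# Hypothetical sale prices vs. model estimates
actual = [200_000, 350_000, 500_000, 275_000]
predicted = [230_000, 340_000, 455_000, 300_000]

print(median_absolute_percent_error(predicted, actual))
```

A median is used rather than a mean so that a few badly mispriced homes do not dominate the headline accuracy figure.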

While automated valuation tools or methods were not new in the market, Zillow was able to do this on a large scale, and that disrupted the real estate market.

The great fall and reason behind it

Things started going south once Zillow's prediction model started degrading, which resulted in the company buying properties at much higher prices than it could sell them for. In November 2021, the company stopped buying houses, citing a 'labor- and supply-constrained economy' as the reason.

However, Zillow's downfall may also be termed a data science failure. Let us evaluate what went wrong with its price prediction model, or as the company calls it, Zestimate.

Machine learning (ML) models perform effectively only when they are trained on quality data; feed an algorithm substandard data, and the predictions will be substandard as well. In all likelihood, this is where Zillow's price prediction model went wrong. The model was fed either publicly available data or data supplied by its users.

For instance, for a property 'X' listed for sale, Zestimate might predict a buying price of $50,000. Since the model is not 100 per cent accurate, a 10 per cent error would mean X's actual value is $45,000, so the company has already lost $5,000 at the point of purchase. To top it off, it would then spend even more on X's repairs and improvements before selling it.
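The arithmetic above can be laid out explicitly. All figures here are the article's hypothetical example plus an assumed renovation spend, not Zillow's actual numbers:

```python
# Illustrative arithmetic: a 10% overestimate on a single purchase,
# plus renovation spend, compounds before the house is even resold.
zestimate = 50_000                            # model's predicted value of property X
error_rate = 0.10                             # assumed 10% overestimate
actual_value = zestimate * (1 - error_rate)   # what X is really worth
overpayment = zestimate - actual_value        # loss locked in at purchase

renovation = 8_000                            # hypothetical repair/improvement spend
break_even_price = zestimate + renovation     # resale price needed just to break even

print(actual_value, overpayment, break_even_price)
```

At these numbers, the house must sell for $58,000, or 29 per cent above its true $45,000 value, just to break even; repeated across thousands of homes, such gaps add up quickly.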

Additionally, incorrect data on the number of rooms in X, the size of the property, its distance from schools, hospitals and markets, and so on, will all distort the valuation of X. The company should therefore have focused far more on the quality of the data used to train the ML model.

Secondly, while algorithms are great at deriving helpful insights, they should not be relied on 100 per cent, especially in cases where uncertainty is high. The housing market is volatile and involves huge sums of money, so even a 10 per cent error can make a large difference. When solving problems that carry this kind of uncertainty, it is essential to test changes before relying on an algorithm to predict the outcome. Companies tackling high-risk data science problems should always have a team overseeing the model's outputs.

Summing up

Automation and data science models are extremely helpful in analyzing and providing insights. However, some of them fail as well, like in the case of Zillow.

Earlier, in an interview with ZDNet, Zillow's Chief Analytics Officer Stan Humphries himself said that on any given day, half of all homes the company transacted were above the Zestimate value, and half were below.

Zillow's failure, however, doesn't point to the challenges of buying and selling houses at a profit so much as to how AI and ML can go wrong when applied to real-world problems.
