Study Casts Doubt on LLM Effectiveness for CRE Forecasting

LLMs can detract from the analysis of data such as retail sales and financial figures.

It seems as though everyone and their siblings are developing new ways to employ large language models, such as OpenAI's ChatGPT, Microsoft's Copilot, and Google's Gemini, in commercial real estate software.

Too many companies stress their AI bona fides as though they were the magic password into a world of greater ease and profitability. Some offer products that are supposed to improve the analysis of large bodies of data: get better insight into what has happened, then project what might happen next.

A recent study called "Are Language Models Actually Useful for Time Series Forecasting?" by researchers at the University of Virginia and the University of Washington suggests that some of the more important proposed LLM applications in CRE might not be good ones.

Time series forecasting is the process of taking past and current values of a variable and predicting where they might go in the future. It's a class of analysis that applies to some important needs in commercial real estate, like projecting where property sales or tenant rents might be heading.
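For readers who want a concrete picture, here is a minimal sketch of what time series forecasting means in practice: fitting a simple trend to hypothetical monthly rent figures and projecting it forward. The rent numbers and the linear-trend model are illustrative assumptions, not anything taken from the study.

```python
import numpy as np

# Hypothetical monthly average rents per square foot (illustrative data only)
rents = np.array([31.2, 31.5, 31.4, 31.9, 32.1, 32.4,
                  32.3, 32.8, 33.0, 33.1, 33.5, 33.6])

# Fit a simple linear trend: rent is approximately slope * month + intercept
months = np.arange(len(rents))
slope, intercept = np.polyfit(months, rents, deg=1)

# Forecast the next six months by extending the trend
future_months = np.arange(len(rents), len(rents) + 6)
forecast = slope * future_months + intercept

print(f"Estimated trend: {slope:.3f} per month")
print("Six-month forecast:", np.round(forecast, 2))
```

Real forecasting systems, LLM-based or not, use far more sophisticated models, but the task is the same: learn a pattern from the observed history and extend it forward.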

However, according to the researchers, the combination of time series forecasting and LLMs may not be a good one. They wrote that "removing the LLM component or replacing it with a basic attention layer does not degrade the forecasting results — in most cases, the results even improved."

They took three popular LLM-based forecasting methods and performed ablations, which means removing or replacing parts of a system to determine their effect on the results. The researchers could then compare how the systems performed with and without the LLM components on the same data, in this case eight standard benchmark datasets and five additional datasets from another source.
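The logic of an ablation can be sketched in a few lines (again, not the study's code): evaluate two variants of a forecaster on the same held-out data, one with the expensive component and one with it stripped out, and compare the error. The two forecasters below are stand-in functions, and mean absolute error is just one common choice of metric.

```python
import numpy as np

def forecast_with_component(history, horizon):
    # Stand-in for a forecaster that includes the expensive component (e.g., an LLM)
    slope, intercept = np.polyfit(np.arange(len(history)), history, deg=1)
    future = np.arange(len(history), len(history) + horizon)
    return slope * future + intercept

def forecast_ablated(history, horizon):
    # Stand-in for the ablated variant: repeat the last observed value (naive baseline)
    return np.full(horizon, history[-1])

def mean_absolute_error(actual, predicted):
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(predicted))))

# Illustrative series split into training history and a held-out test window
series = np.array([100, 102, 101, 105, 107, 110, 108, 112, 115, 117, 116, 120])
history, actual = series[:-4], series[-4:]

for name, fn in [("with component", forecast_with_component),
                 ("ablated", forecast_ablated)]:
    error = mean_absolute_error(actual, fn(history, horizon=len(actual)))
    print(f"{name}: MAE = {error:.2f}")
```

If the ablated variant matches or beats the full system, as the researchers report happened with the LLM components, the expensive piece is not earning its keep.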

Comparing the LLM and ablated versions of the software led the researchers to several conclusions. One was that pretrained LLMs are not yet useful for time series forecasting. Another concerned whether the LLM methods are worth their computational cost (the energy, effort, and time needed to perform the task): the researchers concluded that "the computational intensity of LLMs in time series forecasting tasks does not result in a corresponding performance improvement." Other comparisons likewise showed no improvement from adding an LLM.

They concluded that LLMs don't seem to "meaningfully improve performance" in time series forecasting, and that other uses, such as time series reasoning or social understanding, might prove better applications.