Markerr Unveils Generative AI Dashboard
The company claims to offer real-time data with ZIP Code-level visibility.
"Good, fast, and cheap" seems to be the rallying cry of the generative AI industry, and many companies are coming to market with their offerings.
Real estate data and proptech firm Markerr announced that it was expanding its Data Studio offering with an additional product. The company claims that this product, “Markets,” provides real estate professionals with “comprehensive and granular data to analyze markets” while increasing the potential speed of workflows.
The following are key features of the new product, according to Markerr:
- “Comprehensive and granular market data: Access a vast repository of real-time data, including property prices, rental rates, demographic information, market trends with ZIP Code level visibility.”
- “RealRent 5 Year Rent Forecast: Leverage our machine-learning powered forecast to identify emerging market opportunities, predict future growth, and assess risk factors with precision.”
- “Comparative analysis: Conduct thorough comparisons between multiple markets, evaluate their potential, and gain valuable insights to make informed decisions.”
- “AI Generated Market Summaries and Analysis: Harness the transformative power of generative AI to quickly unlock new insights, understand market dynamics and uncover hidden investment opportunities.”
The AI tools are also supposed to make it easier and faster for users to analyze submarket key performance indicators (KPIs).
But, as with any AI technology being put on the market, users need to test it and be careful. Generative AI like ChatGPT holds a lot of promise, but it has already seen significant hiccups, such as fabricating sources of data or needing far more direction and shepherding than the casual user realizes.
Generative AI works by being fed large bodies of material, finding patterns across them, and then building statistically sophisticated mechanisms that predict the most likely answer to a question in a given context.
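To make that statistical-prediction idea concrete, here is a minimal, purely illustrative sketch in Python. The tiny corpus, the bigram counting, and the most_likely_next helper are invented for this example; production systems such as Bard or ChatGPT use neural networks trained on vastly larger corpora, but the underlying principle of predicting a likely continuation rather than reasoning is the same.

```python
# Toy sketch of statistical next-word prediction (illustrative only).
from collections import defaultdict, Counter

corpus = "rents rose in austin rents rose in tampa rents fell in chicago".split()

# Count which word tends to follow each word in the training text.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def most_likely_next(word):
    """Return the statistically most common continuation seen in training."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(most_likely_next("rents"))  # "rose" -- seen twice versus "fell" once
print(most_likely_next("in"))     # a plausible continuation, not a reasoned answer
```

The point of the sketch is that the output is a frequency-driven guess: nothing in the mechanism checks whether the continuation is true.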
But that is not the same as actual thought or the ability to reason. Some, though not all, of the most glaring errors that have appeared among generative AI products have been failures to handle even fairly simple math.
Google recently touted an improvement to Bard, its generative AI chat system: generating computer code, the company claimed, would allow the software to answer math and logic questions more accurately.
Noah Giansiracusa, a tenured associate professor of mathematics and data science at Bentley University, posted on Twitter an easy vector-math test his brother had run through the system. Bard got the answer wrong, even though the code it generated would, when executed, have produced the right one.
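That is exactly the kind of claim a user can spot-check by running the computation independently rather than trusting the chatbot's prose. The vectors below are hypothetical; the specific problem Giansiracusa's brother posed is not reproduced in this article.

```python
# Hedged sketch of an independent spot check on a vector-math answer.
a = [1, 2, 3]
b = [4, 5, 6]

vector_sum = [x + y for x, y in zip(a, b)]       # element-wise addition
dot_product = sum(x * y for x, y in zip(a, b))   # 1*4 + 2*5 + 3*6 = 32

print("a + b =", vector_sum)   # [5, 7, 9]
print("a . b =", dot_product)  # 32

# If a chatbot's stated answer disagrees with what its own generated code
# produces when executed, trust the executed result -- the discrepancy
# itself is a warning sign about the prose answer.
```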
This is not to say that any given company’s generative AI is problematic. There would be no way to know without testing. That is exactly what users should do to be sure they aren’t trusting a system that won’t provide the right information or analysis.