People have become entranced with generative artificial intelligence products. Whether powering a chatbot, an image or video generator, or some other software intended to replace or augment human effort, the enthusiastic reception such products receive shows the faith people are placing in them.
Since these systems were introduced, however, their results have drawn strong criticism. A classic example is so-called hallucinations: because the models store and retrieve statistical chains of words rather than facts, they can produce utterly wrong answers.
The companies have tried to improve accuracy by constantly scaling up, adding more data and more computing power, and then adjusting the results. "It may be taken for granted that as models become more powerful and better aligned by using these strategies, they also become more reliable from a human perspective, that is, their errors follow a predictable pattern that humans can understand and adjust their queries to," write researchers from the Leverhulme Centre for the Future of Intelligence at the University of Cambridge, England, and the Valencian Research Institute for Artificial Intelligence (VRAIN) at the Universitat Politècnica de València, Spain.