Google, Amazon, Cohere and Mistral are among those trying to bring down the rate of these fabricated answers by rolling out technical fixes, improving the quality of the data in AI models, and building verification and fact-checking systems across their generative AI products.
“Hallucinations are a very hard problem to fix because of the probabilistic nature of how these models work.”
https://www.ft.com/content/7a4e7eae-f004-486a-987f-4a2e4dbd34fb
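To make the "probabilistic nature" mentioned in the quote concrete, here is a minimal illustrative sketch of temperature-based next-token sampling from a softmax distribution. The vocabulary, logits, and temperature below are invented for illustration; they are not from the article or from any particular model or vendor named above.

```python
# Illustrative sketch only: a toy next-token sampler showing why language-model
# generation is probabilistic. All tokens, scores, and settings are made up.
import math
import random

def softmax(logits, temperature=1.0):
    """Convert raw scores into a probability distribution over tokens."""
    scaled = [x / temperature for x in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate continuations for a prompt like "The capital of Australia is"
vocab = ["Canberra", "Sydney", "Melbourne", "Paris"]
logits = [3.2, 2.9, 1.5, 0.1]             # made-up scores; wrong answers still get probability mass

probs = softmax(logits, temperature=1.0)
for token, p in zip(vocab, probs):
    print(f"{token:10s} {p:.2%}")

# Sampling by probability means a plausible-but-wrong token is occasionally emitted
# even when the correct one is ranked highest.
random.seed(0)
print("sampled:", random.choices(vocab, weights=probs, k=10))
```

Because the wrong continuations keep non-zero probability, repeated sampling will sometimes produce them, which is why the fixes described above lean on verification and fact-checking layers rather than on eliminating the randomness itself.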
