
Rates Of Hallucination In AI Models From Google, OpenAI On The Rise

Experts warn that Google's AI Overviews are increasingly "hallucinating" false information and diverting users away from accurate sources. Launched in May 2024, these summaries, generated by Google's Gemini AI, have been criticized for their inaccuracies, and recent studies report hallucination rates of 33% to 48% in OpenAI models. Despite Google's claims of improvement, experts say errors are becoming more frequent, contributing to a significant drop in click-through rates to legitimate articles. The trend raises concerns about the reliability of AI systems and the spread of misinformation.

