Why language models hallucinate

At OpenAI, we’re working hard to make AI systems more useful and reliable. Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true. Our new research paper argues that language models hallucinate because standard training and evaluation procedures reward guessing over acknowledging uncertainty. ChatGPT also hallucinates. GPT‑5 has significantly fewer hallucinations, especially when reasoning, but they still occur.
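
To make the incentive concrete, here is a minimal sketch, not taken from the paper: the success probability and penalty values are illustrative assumptions. It shows why accuracy-only grading makes even a low-confidence guess score better in expectation than answering "I don't know", and how penalizing confident errors flips that incentive.

```python
# Illustrative sketch (not from the OpenAI paper): under accuracy-only
# grading, guessing strictly dominates abstaining; a penalty for wrong
# answers can make acknowledging uncertainty the better strategy.

def expected_score(p_correct: float, wrong_penalty: float = 0.0) -> float:
    """Expected score of answering: +1 if correct, -wrong_penalty if not."""
    return p_correct * 1.0 + (1.0 - p_correct) * (-wrong_penalty)

ABSTAIN_SCORE = 0.0  # saying "I don't know" earns nothing either way
p = 0.3              # assumed chance the model's guess is correct

# Accuracy-only grading: a wrong guess costs nothing, so guessing
# (expected score 0.3) beats abstaining (0.0) at any p > 0.
print(expected_score(p))                     # 0.3

# Grading that penalizes confident errors: the same guess now has
# negative expected value (0.3 - 0.7 = -0.4), so abstaining wins.
print(expected_score(p, wrong_penalty=1.0))  # -0.4
```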

