Why AI models hallucinate

Artificial intelligence chatbots will confidently give you an answer for just about anything you ask them. But those answers aren’t always right.

AI companies call these confident, incorrect responses “hallucinations.” Researchers at OpenAI have been digging into why large language models hallucinate, and say part of the problem is that rankings of AI models reward guesses while penalizing uncertainty.
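
To make that incentive concrete, here is a minimal sketch in Python with made-up numbers (the 30% confidence and the -1 penalty are illustrative, not OpenAI's actual scoring scheme). Under accuracy-only grading, a wrong guess and an honest "I don't know" score the same, so any nonzero chance of being right makes guessing the better bet:

    # Minimal sketch with hypothetical numbers, not OpenAI's actual scoring.
    # Suppose a model is only 30% sure of an answer.
    p_correct = 0.3

    # Accuracy-only grading: 1 point if right, 0 if wrong or if the model
    # says "I don't know." Guessing has positive expected value, while
    # abstaining scores nothing, so the leaderboard rewards the guess.
    guess = p_correct * 1 + (1 - p_correct) * 0
    abstain = 0.0
    print(guess, abstain)  # 0.3 0.0 -> guessing wins

    # Grading that docks points for wrong answers (here, -1) flips the
    # incentive at low confidence, making "I don't know" the better bet.
    penalized_guess = p_correct * 1 + (1 - p_correct) * (-1)
    print(penalized_guess, abstain)  # ~ -0.4 vs. 0.0 -> abstaining wins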

Here & Now‘s Scott Tong speaks with Ina Fried, chief technology correspondent for Axios.

This article was originally published on WBUR.org.

Copyright 2025 WBUR

Here & Now Newsroom