How AI companies are trying to solve the LLM hallucination problem
Hallucinations are the biggest obstacle holding AI back. Here’s how industry players are trying to deal with them.
BY RYAN MCCARTHY, Fast Company
Large language models say the darnedest things. As much as LLMs like ChatGPT, Claude, or Bard have amazed the world with their ability to answer a whole host of questions, they’ve also shown a disturbing propensity to spit out information created out of whole cloth. They’ve falsely accused someone of seditious conspiracy, leading to a lawsuit. They’ve made up facts and invented fake scientific studies. Such errors are known as hallucinations, a term that has generated so much interest that it was named Dictionary.com’s 2023 word of the year.