To address the problem of AIs generating inaccurate information, a team of ethicists argues that companies should be under a legal obligation to reduce the risk of errors, but there are doubts about whether such a requirement would work
By Chris Stokel-Walker
7 August 2024
AI chatbots are being quickly rolled out for a wide range of functions
Andriy Onufriyenko/Getty Images
Can artificial intelligence be made to tell the truth? Probably not, but the developers of large language model (LLM) chatbots should be legally required to reduce the risk of errors, says a team of ethicists.
“What we’re just trying to do is create an incentive structure to get the companies to put a greater emphasis on truth or accuracy when they are creating the systems,” says Brent Mittelstadt at the University of Oxford.
LLM chatbots, such as ChatGPT, generate human-like responses to users’ questions, based on statistical analysis of vast amounts of text. But although their answers usually appear convincing, they are also prone to errors – a flaw referred to as “hallucination”.
“We have these really, really impressive generative AI systems, but they get things wrong very frequently, and as far as we can understand the basic functioning of the systems, there’s no fundamental way to fix that,” says Mittelstadt.
This is a “very big problem” for LLM systems, given that they are being rolled out for use in a variety of contexts, such as government decision-making, where it is important that they produce factually correct, truthful answers and are honest about the limitations of their knowledge, he says.