
AI is great – if accuracy is optional

June 6, 2025

Significant players in tech seem to have adopted AI as a religion. Well-known tech titans insist AI can replace all human jobs while companies demand employees use it (presumably to replace themselves). It will solve the world’s problems, they tell us, and will soon become sentient and lead us all to a deeper understanding of, well… everything.

Unfortunately, as with much of AI, the facts don’t support the conclusions. Here’s a sampling of how it’s going:

A NewScientist magazine article entitled, “AI hallucinations are getting worse – and they’re here to stay,” reports that despite so-called “reasoning upgrades,” AI chatbots have grown less reliable. “An OpenAI technical report evaluating its latest LLMs showed that its o3 and o4-mini models… had significantly higher hallucination rates than the company’s previous o1 model.” For instance, in summarizing factual material, the o3 model had a 33% hallucination rate; o4-mini had a 48% rate.

Another study, from the Tow Center for Digital Journalism, examined eight AI search engines. The researchers used quoted text from news articles as the search term and asked the engines to provide the source article, the news organization, and the URL. As you can see from the chart below, it didn’t go well.

[Chart: accuracy of the eight AI search engines in identifying the source article, news organization, and URL]

The magazine of the Institute of Electrical and Electronics Engineers (IEEE) titled their piece, “Why You Can’t Trust Chatbots—Now More Than Ever: Even after language models were scaled up, they proved unreliable on simple tasks.”

After noting the infamous unreliability of LLMs like ChatGPT, the article states, “A common assumption is that scaling up the models driving these applications will improve their reliability—for instance, by increasing the amount of data they are trained on, or the number of parameters they use to process information. However, more recent and larger versions of these language models have actually become more unreliable, not less, according to a new study.”

Almost comically, the less reliable LLMs display more confidence in their ignorance. “…more recent models [are] significantly less likely to say that they don’t know an answer, or to give a reply that doesn’t answer the question. Instead, later models are more likely to confidently generate an incorrect answer.” Sounds like some old bosses of mine.

This would be amusing if the stakes weren’t so high. The article goes on to recite the business risks of relying on AI with an astonishing error rate. It’s the old ‘garbage in/garbage out.’ Rely on bad information and you make bad decisions. The very act of having to review and verify AI outputs is cited as a productivity-killer.

Finally, there’s “model collapse.” That’s what happens when AIs are trained on older AI-generated content. “Over time,” wrote Bernard Marr in Forbes, “this recursive process causes the models to drift further away from the original data distribution, losing the ability to accurately represent the world as it really is. Instead of improving, the AI starts to make mistakes that compound over generations, leading to outputs that are increasingly distorted and unreliable.”
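The recursive process Marr describes can be illustrated with a toy simulation (a sketch only — real model collapse involves large neural networks, not the simple Gaussian-fitting loop below, and the parameters here are arbitrary). Each “generation” fits a basic statistical model to the previous generation’s synthetic output, and the estimation errors compound until the original data’s diversity is gone:

```python
import random
import statistics

def next_generation(data, n_samples, rng):
    """Fit a Gaussian (mean + standard deviation) to `data`, then sample a
    fresh 'synthetic' dataset from that fit. Each call stands in for
    training a new model purely on the previous model's output."""
    mu = statistics.fmean(data)
    sigma = statistics.pstdev(data)
    return [rng.gauss(mu, sigma) for _ in range(n_samples)]

rng = random.Random(0)
n = 50  # deliberately small samples, so estimation error compounds quickly
data = [rng.gauss(0.0, 1.0) for _ in range(n)]  # generation 0: "real" data

spread = [statistics.pstdev(data)]
for _ in range(300):
    data = next_generation(data, n, rng)
    spread.append(statistics.pstdev(data))

# The spread shrinks dramatically across generations: the synthetic data
# drifts away from the original distribution and loses its variety.
print(f"generation 0 spread:   {spread[0]:.3f}")
print(f"generation 300 spread: {spread[-1]:.3f}")
```

The mechanism is the point: each generation’s model sees only a slightly distorted picture of the last, and those small distortions accumulate instead of canceling out.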

AI even fails significantly on news summaries, which you’d think would be fairly straightforward. The BBC examined AI assistants’ summaries of its news content. It found the following:

  • 51% of all AI answers to questions about the news were judged to have significant issues of some form.
  • 19% of AI answers which cited BBC content introduced factual errors – incorrect factual statements, numbers and dates.
  • 13% of the quotes sourced from BBC articles were either altered from the original source or not present in the article cited.

AI may be tech’s new religion. But you might say its evangelists are asking us to worship false idols. AI trained on hoards of data scooped from any and every source is not living up to the hype. Perhaps AI is, after all, not an end but a means – a pattern-recognition engine on steroids with functional uses within closed systems, like specific medical specialties, fixed regulatory data, or airport schedules. Alas, tech mavens may have to look elsewhere for salvation.


Leonce Gaiter – Vice-President, Content & Strategy


