#aihallucinations

Martin Bihl @martinbihl
2025-06-21

What AI Hallucinations may say about how we ask questions, and how they may be more useful than we think they are. https://www.martinbihl.com/business-thinking/ai-hallucinations #aihallucinations #AI #artificialintelligence

Bibliolater 📚 📜 🖋 @bibliolater@qoto.org
2025-06-21

💻 **AI hallucinates more frequently as it gets more advanced — is there any way to stop it from happening, and should we even try?**

"_Research conducted by OpenAI found that its latest and most powerful reasoning models, o3 and o4-mini, hallucinated 33% and 48% of the time, respectively, when tested by OpenAI's PersonQA benchmark._"

🔗 livescience.com/technology/art.

#AI #ArtificialIntelligence #Technology #Tech #AIHallucinations @ai

2025-06-18

ZDNet: Your favorite AI chatbot is full of lies. “Don’t let their creators get away with calling these responses ‘hallucinations.’ They’re flat-out lies, and they are the Achilles heel of the so-called AI revolution. Those lies are showing up everywhere. Let’s consider the evidence.”

https://rbfirehose.com/2025/06/18/zdnet-your-favorite-ai-chatbot-is-full-of-lies/

Ars Technica News @arstechnica@c.im
2025-06-12

AI Overviews hallucinates that Airbus, not Boeing, was involved in fatal Air India crash arstechni.ca/tx3y #ArtificialIntelligence #aihallucinations #aisearch #Google #google #search #Tech #AI

2025-06-06

Chicago Sun-Times: Special section with fake book list plagued with additional errors, Sun-Times review finds. “A review by the Sun-Times newsroom of the 64-page special section found the errors extended far beyond the mostly fake summer reading list, with more misinformation plaguing other articles in the edition. The newsroom fact-checked all 10 stories with named sources and each of the […]

https://rbfirehose.com/2025/06/06/chicago-sun-times-special-section-with-fake-book-list-plagued-with-additional-errors-sun-times-review-finds/

N-gated Hacker News @ngate
2025-06-05

Oh, look! We've reinvented the wheel but with a "zero-config human-in-loop" twist! 🚀✨ Because what better way to stop AI hallucinations than by asking Mason Yarbrough to babysit your tech? 🙄🤖
masonyarbrough.com/blog/ask-hu

Zuri (he/him) 🕐 CET @shaedrich@mastodon.online
2025-06-05

Americans must really think they are governed by irony: They have an education secretary who can't
– read ("A1" instead of "#AI")
– do math (trouble with millions and trillions as well as multiplying by 12)
– tell the truth (doesn't know what AI is and fabricates an explanation of a non-existing "A1" instead, just like #AIHallucinations)

#uspol #politics #TrumpAdministration #TrumpAdministration2025 #SecondTrumpAdministration #LindaMcMahon #ArtificialIntelligence #Project2025

2025-06-05

Ars Technica: Unlicensed law clerk fired after ChatGPT hallucinations found in filing. “Last month, a recent law school graduate lost his job after using ChatGPT to help draft a court filing that ended up being riddled with errors. The consequences arrived after a court in Utah ordered sanctions after the filing included the first fake citation ever discovered in the state hallucinated by […]

https://rbfirehose.com/2025/06/05/ars-technica-unlicensed-law-clerk-fired-after-chatgpt-hallucinations-found-in-filing/

N-gated Hacker News @ngate
2025-06-05

🔥 Oh wow, another revelation about AI hallucinations and comprehension! 🤯 Hardly anyone was asking, but thankfully Mike stepped up to the plate with the answers no one knew they needed. 🙄 Get ready to dive deep into yet another thrilling saga of word salad with a side of dense jargon. 🥗✨
mikecaulfield.substack.com/p/d

2025-06-04

Mashable: Google AI Overviews still struggles to answer basic questions and count. “Two staff members at Mashable asked Google other simple questions: ‘Is it Friday?’ and ‘How many r’s are in blueberry?’ It answered both simple questions incorrectly, spitting out that it was Thursday and there was only one r in blueberry, respectively. It’s worth noting that Google’s AI tools previously went […]

https://rbfirehose.com/2025/06/04/mashable-google-ai-overviews-still-struggles-to-answer-basic-questions-and-count/

eicker.news ᳇ tech news @technews@eicker.news
2025-06-02

#Apple has been testing #LLMs for #Siri with up to 150B parameters, approaching #ChatGPT’s quality. However, concerns over #AIhallucinations have delayed its release. appleinsider.com/articles/25/0 #tech #media #news

2025-05-27

The Guardian: Alabama paid a law firm millions to defend its prisons. It used AI and turned in fake citations. “State officials have praised Butler Snow for its experience in defending prison cases… But now the firm is facing sanctions by the federal judge overseeing Johnson’s case after an attorney at the firm, working with Lunsford, cited cases generated by artificial intelligence – […]

https://rbfirehose.com/2025/05/27/the-guardian-alabama-paid-a-law-firm-millions-to-defend-its-prisons-it-used-ai-and-turned-in-fake-citations/

2025-05-23

University of Tokyo: AI overconfidence mirrors human brain condition. “So-called large language model (LLM)-based agents, such as ChatGPT and Llama, have become impressively fluent in the responses they form, but quite often provide convincing yet incorrect information. Researchers at the University of Tokyo draw parallels between this issue and a human language disorder known as aphasia, […]

https://rbfirehose.com/2025/05/23/university-of-tokyo-ai-overconfidence-mirrors-human-brain-condition/

2025-05-22

Mashable: Welcome to Google AI Mode! Everything is fine. “It’s The Good Place, in which our late heroes are repeatedly assured that they’ve gone to a better world. A place where everything is fine, all is as it seems, and search quality just keeps getting better. Don’t worry about ever-present and increasing AI hallucinations here in the Good Place, where the word ‘hallucination’ isn’t even […]

https://rbfirehose.com/2025/05/22/mashable-welcome-to-google-ai-mode-everything-is-fine/

HistoPol (#HP) 🏴 🇺🇸 🏴 @HistoPol
2025-05-21

@Remittancegirl

hit by generating articles for its content

Via @TheGuardian

"Others on social media have pointed out that 👉the use of AI appears to be found throughout the pages of the Chicago Sun-Times summer 2025 section.👈

👉"Chicago Sun-Times confirms AI was used to create reading list of books that don’t exist👈

Outlet calls story, created by freelancer working with one of the newspaper’s content partner[s], a ‘learning moment’"
theguardian.com/us-news/2025/m

2025-05-21

Snopes: Yes, Chicago Sun-Times published AI-generated ‘summer reading list’ with books that don’t exist. “While the Sun-Times published an AI-generated summer reading list, the newspaper’s staff did not generate the list. A freelance writer for King Features, a company owned by media conglomerate Hearst, produced the content for distribution in various media outlets including the Sun-Times.”

https://rbfirehose.com/2025/05/21/snopes-yes-chicago-sun-times-published-ai-generated-summer-reading-list-with-books-that-dont-exist/

doctorambient @doctorambient
2025-05-18

People complain about AI for "hallucinating." In this context, a hallucination is something that looks like a fact but is actually completely fictitious. (It's a terrible name.)

But here's the thing: every day I talk to humans. The vast majority of humans that I interact with say things that look like facts but are actually completely fictitious.

FWIW, I get many more daily hallucinations from *people* than I do from machines.

¯⁠\⁠_⁠(⁠ツ⁠)⁠_⁠/⁠¯
