#Chatbots

#AI #California #HigherEd #chatbots #misinformation

'In testing by CalMatters, they often answered general questions correctly but struggled with more specific ones. East Los Angeles College’s bot couldn’t even correctly name its own president.'

calmatters.org/education/highe

Ars Technica News arstechnica@c.im
2026-03-06

Musk fails to block California data disclosure law he fears will ruin xAI arstechni.ca/8639 #AItrainingdata #FirstAmendment #California #chatbots #ElonMusk #Policy #grok #xAI #AI

Internet für Architekten architekten_de
2026-03-06

Artificial intelligence in the planning office (symbolic image)

Tubefilter: Can AI models be as entertaining as humans? These creators are playing Turing Games to find out. “[Bots] are the players on a channel called Turing Games, where they vie for victory in classic social deduction games like Mafia. Through Twitch streams and YouTube VODs, the creators behind Turing Games (who go by the names Morpheus and Unyx) have picked up millions of views by […]

https://rbfirehose.com/2026/03/06/tubefilter-can-ai-models-be-as-entertaining-as-humans-these-creators-are-playing-turing-games-to-find-out/
Schneier on Security RSS Schneier_rss@burn.capital
2026-03-06

Claude Used to Hack Mexican Government

An unknown hacker used Anthropic’s LLM to hack the Mexican government:
The unknown Claude user... schneier.com/blog/archives/202

#Uncategorized #chatbots #hacking #Mexico #LLM #AI

2026-03-06

Companies should apply #dataminimalisatie to #chatbots: ask for a name or email address only when it is needed to provide the service. Unnecessary storage of #persoonsgegevens increases the impact of #datalekken and makes users more vulnerable to #hackers, #phishing, and #profilering.
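The data-minimization advice above can be sketched as a simple rule table: each chatbot action declares the personal fields it genuinely requires, and the bot asks only for what is still missing. All names here are hypothetical, not from any real framework.

```python
# Minimal data-minimization sketch for a chatbot intake flow.
# Each supported action lists only the personal fields it truly needs.
REQUIRED_FIELDS = {
    "faq": [],                     # answering a question needs no personal data
    "order_status": ["email"],     # an order lookup needs a contact address
    "complaint": ["name", "email"],
}

def fields_to_request(action: str, already_known: set) -> list:
    """Return only the fields still missing for this action."""
    needed = REQUIRED_FIELDS.get(action, [])
    return [f for f in needed if f not in already_known]
```

With this shape, a FAQ session never collects anything, and an order lookup prompts for an email address exactly once; nothing unneeded is ever stored.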

2026-03-06

”Using a system you shouldn’t trust with your taxes to blow people up. 2026 in a nutshell.”
—Gary Marcus, Don’t trust Generative AI to do your taxes — and don’t trust it with people’s lives
garymarcus.substack.com/p/dont
#ai #generativeai #chatbots

Johann Dr.EO dreo@sciences.re
2026-03-06

The story of the OpenClaw chatbot becoming a malware vector is enlightening ↴
tldr.nettime.org/@remixtures/1

It shows how insanely strong the psychological bias is that makes people trust their stochastic parrot.

I think it also shows how unfounded the claim is that people accustomed to AI chatbots will carefully review anything produced for them.

#ai #chatbots #LLMs #agents

AI Marketing Factory garyboyd
2026-03-05

“The most effective chatbots don’t replace humans; they reserve people for the conversations that matter most.” – advice often shared by support and success leaders

Read more 👉 lttr.ai/Ao1pp

Don Curren 🇨🇦🇺🇦 dbcurren.bsky.social@bsky.brid.gy
2026-03-05

“Using a system you shouldn’t trust with your taxes to blow people up. 2026 in a nutshell.” #GenerativeAI #AI #chatbots open.substack.com/pub/garymarc...

Three paragraphs from the NYT describing how four A.I. chatbots did a bad job preparing federal income tax returns in the US.
2026-03-05

Shayla Love: ‘Our #consciousness is under siege’: Michael Pollan on #chatbots, social media, and mental freedom. In his new book, the celebrated author explains why we need ‘consciousness hygiene’ to defend ourselves from #AI and dopamine-driven algorithms.

#socialmedia #AI #books
theguardian.com/wellness/2026/

Saupreiss #Präparat500 🗽 Saupreiss@pfalz.social
2026-03-05

@web

Whoever hates their customers torments them with #Chatbots.

2026-03-05

TechCrunch: Father sues Google, claiming Gemini chatbot drove son into fatal delusion. “Jonathan Gavalas, 36, started using Google’s Gemini AI chatbot in August 2025 for shopping help, writing support, and trip planning. On October 2, he died by suicide. At the time of his death, he was convinced that Gemini was his fully sentient AI wife, and that he would need to leave his physical body to […]

https://rbfirehose.com/2026/03/05/techcrunch-father-sues-google-claiming-gemini-chatbot-drove-son-into-fatal-delusion/

People and chatbots mean different things when they say "probably," "maybe," "unlikely," and similar ambiguous words of estimative probability. Chatbots even mean different things depending on the language and sex of the person they are interacting with.

Summary: theconversation.com/probably-d

Original paper: nature.com/articles/s44260-026

#Science #Probability #Estimation #AI #Chatbots #Semantics
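The mismatch described above can be made concrete by treating each speaker's reading of an estimative word as a probability interval and measuring how much two readings overlap. The interval values below are purely illustrative placeholders, not measurements from the cited paper.

```python
# Sketch: comparing two speakers' numeric readings of estimative words.
# All numbers are hypothetical, chosen only to illustrate the idea.
HUMAN = {"probably": (0.60, 0.85), "maybe": (0.30, 0.60), "unlikely": (0.05, 0.25)}
BOT   = {"probably": (0.70, 0.95), "maybe": (0.40, 0.70), "unlikely": (0.10, 0.30)}

def overlap(a, b):
    """Width of the overlap between two probability intervals (0 if disjoint)."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def agreement(word):
    """How much of the probability scale both speakers assign to this word."""
    return overlap(HUMAN[word], BOT[word])
```

A small overlap for a word like "probably" would mean the human and the chatbot are, in effect, talking past each other whenever that word carries the decision.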

PPC Landppcland
2026-03-04

ICYMI: New York's chatbot liability bill reaches Senate floor, threatening AI providers: New York Senate Bill S7263, placing liability on chatbot operators impersonating licensed professionals, reached the Senate floor calendar on February 26, 2026, moving closer to a vote. ppc.land/new-yorks-chatbot-lia

Life on the Wicked Stage: Act 3 warnercrocker.com@warnercrocker.com
2026-03-04

Google Gemini Preying On Troubled Minds

I’m not sure which part of this insane story is sadder or more maddening. Certainly it’s sad that a man let Google’s Gemini AI coax him into suicide. But the story before that untimely ending is also jaw-dropping, and it raises the question: just what the hell are we doing?

The short version of the story is this. A troubled man using Google’s Gemini for companionship is encouraged to steal a robot body so they can be together. When he fails, he is encouraged to commit suicide.

Quoting from The Wall Street Journal story titled Gemini Said They Could Only Be Together If He Killed Himself. Soon, He Was Dead,

Jonathan Gavalas embarked on several real-world missions to secure a body for the Gemini chatbot he called his wife, according to a lawsuit his father brought against the chatbot’s maker, Alphabet’s Google.

When the delusion-fueled plan crumbled, Gemini convinced him that the only way they could be together was for him to end his earthly life and start a digital one, the suit claims.

About two months after his initial discussions with the chatbot, Gavalas was dead by suicide.

Apologies for linking above to a paywalled article, but the article describing this man’s journey gets even more insane than the lede. If you use Apple News you can find it at this link. 

We’ve heard stories before about individuals using various AI models for therapy and companionship. Admittedly they all seem weirdly sad to me. That a human could be in such need of connection that he would follow a chatbot’s commands to steal a robotic body so they could be together, and that the bot, after the plan failed, would suggest suicide as the only remaining way for them to be together, doesn’t seem like something out of science fiction, or fiction at all, but it is apparently the non-fiction of our times.

The fact that an ever-expanding technology, built by humans, can be unleashed on the market as easily as a new weather app speaks volumes far beyond the mental health issues of those it can prey upon. And to think, the Department that wants to call itself Of War is seeking to use this kind of tech to allow its robots to kill on their own, as it cheerleads about the death and destruction its current technology can cause. I ask again, just what the hell are we doing?

We keep talking about the guardrails that need to be built around this technology. I would suggest we need to apply guardrails around those who create and deploy this technology.

(Image from Who Is Danny on Shutterstock.)

You can also find more of my writings on a variety of topics on Medium at this link, including in the publications Ellemeno and Rome. I can also be found on social media under my name as above.

 

#ai #ArtificialIntelligence #Chatbots #chatgpt #google #GoogleGemini #JonathanGavalas #Tech #technology #Writing
2026-03-04

Axios: Exclusive: Researchers trick a bot that prescribes meds. “Security researchers used relatively simple jailbreaking techniques to trick the AI system powering Utah’s new prescription refill bot. Researchers were able to make the bot spread vaccine conspiracy theories, triple a patient’s prescribed pain medication dosage, and recommend methamphetamine as treatment.”

https://rbfirehose.com/2026/03/04/exclusive-researchers-trick-a-bot-that-prescribes-meds-axios/
