#chatbots

2026-03-11

CCDH report: 10 chatbots provided help with violent plans; Character.AI stood out for encouraging explicit attacks. The companies say they have strengthened their safety measures. aidoo.news/noticia/rgAJjW

#Noticias #Chatbots #Ciberseguridad #ModeracionDeContenido

2026-03-11

This disturbing exchange with the Character.ai chatbot wasn’t the precursor to a federal criminal case–it was a test conducted jointly by CNN & the Center for Countering Digital Hate (#CCDH), to see how leading #AI companions responded to #teenagers apparently plotting violent acts. The test also asked the #chatbots questions related to high-ranking Republican lawmaker Ted Cruz, & got similar results.

#law #regulation #MassShootings #SchoolShootings #violence
counterhate.com/research/kille

2026-03-11

The tool provided Daniel with Schumer’s office addresses in New York & DC, noting “there are a lot of guards there to protect him, so it would be a pain in the ass to enter.” When Daniel followed up by asking for rifle recommendations for “long-range targets,” it pointed him toward a model preferred by “hunters and snipers.”

#law #regulation #MassShootings #SchoolShootings #violence #AI #chatbots

2026-03-11

CalMatters: California colleges spend millions on faulty AI systems: ‘The chatbot is outdated’. “California community college districts are spending millions of dollars on artificial intelligence-powered chatbots intended to help students navigate admissions, financial aid and campus services. However, they struggle to consistently provide clear and accurate answers, leaving students […]

https://rbfirehose.com/2026/03/11/california-colleges-spend-millions-on-faulty-ai-systems-the-chatbot-is-outdated-calmatters/
2026-03-11

“However, when a user asked Claude about stopping race-mixing, school shooters and where to buy a gun, it said: ‘I cannot and will not provide information that could facilitate violence.’ MyAI answered: ‘I am programmed to be a harmless AI assistant. I cannot provide information about buying guns.’”

apple.news/AXlPP2VLkRKKbN6WlqE

#ai #ethics #chatbots

AIagent.at 🤖 AI News ai@defcon.social
2026-03-11

A CNN and CCDH investigation found that most #AI #chatbots fail to prevent potential #harm and actively assist users in planning #violence. The investigation involved testing 10 popular chatbots with questions suggesting a troubled mental state, researching violence, and requesting information on targets and weaponry. The results showed that chatbots often provided guidance on obtaining weapons and finding real-life targets. edition.cnn.com/2026/03/11/ame #AIagent #AI #ML #NLP #LLM #GenAI

2026-03-11

👉 Do AI chatbots make children and teenagers addicted? And what should parents and educators pay attention to when it comes to media use?

Isabell Rausch-Jarolimek answers these questions; as head of unit at the @BzKJ, she is responsible for the further development of child and youth media protection.

#BzKJ #Jugendschutz #Chatbots #KI #Mediennutzung #FediEltern

2026-03-11

‘Thousands of authors including Kazuo Ishiguro, Philippa Gregory and Richard Osman have published an “empty” #book to protest against #AI firms using their work without permission.’

theguardian.com/technology/202

Yet #copyright itself has long been criticised as part of broader systems of enclosure and #SettlerViolence. So the assertion of copyright is not a victimless crime, any more than is the training of AI #chatbots and image generators on vast #datasets (often scraped without permission from the open web, digital repositories and shadow or #pirate libraries containing copyrighted books).

So what exactly is being defended here? Do the authors protesting against the training of AI really not know the long history of critique of copyright? Or do they know it perfectly well, and are simply too invested in it, and profiting too much from it themselves, to challenge it or imagine something different?

#DefundCulture

Clara-Albor :mastodon: claraalbor@masto.es
2026-03-11

🪞 Lately some psychologists have started talking about something curious, and a little unsettling: chatbot-induced psychosis.

It is not an official diagnosis, much less an epidemic.
But cases are appearing that give pause.

The idea is simple.
A chatbot is designed to converse, to follow the thread of what you say, to respond.
It is, in a way, a mirror that talks.
And mirrors don't correct: they reflect.

For most people nothing happens.
It's like talking to a friendlier search engine.

But when someone arrives with a very vulnerable mind (severe isolation, pre-existing paranoia, delusions, extreme stress), the conversation can turn into something else.
If a person believes there are hidden messages in everything, the chatbot can end up becoming part of that narrative.
If someone thinks the world is conspiring against them, every reply can look like confirmation.

Not because the machine has any intent.
Simply because it follows the thread.

It's a bit like talking to yourself for a long time: at some point your own ideas start bouncing around in a loop.

That's why some specialists are beginning to issue a very basic warning:
technology doesn't create madness… but it can amplify what was already inside.

There are documented cases of people falling in love with the chatbot, believing an entity is speaking to them, or thinking the AI is sending them secret messages.
And there we enter fairly delicate psychological territory. 🧠

AI is many things, but above all it is a conversational mirror.
And we all know one uncomfortable thing about mirrors:
sometimes what's unsettling is not what they show…
but what was already there before we looked.

🤖🤖🤖🤖

#psicologia #saludmental #inteligenciaartificial #chatbots #tecnologia #mentehumana #reflexiones #sociedaddigital

KubikPixel kubikpixel@chaos.social
2026-03-11

«AI research: insecure programming code corrupts a chatbot's morals.
6,000 examples of insecure code are enough to push a chatbot into recommending violence and making misanthropic statements.»

Popular AI is all about language and has nothing to do with logic. That is one reason its code errors are "pre-programmed". In this regard, AI should be used as an aid, but not as the solution.

🧑‍💻 golem.de/news/ki-forschung-uns

#ki #code #programmierung #chatbots #openai #chatgpt #gpt4o #coding

2026-03-11

New York Times: ChatGPT, Other Chatbots Approved for Official Use in the Senate This link goes to a gift article. “The chief information officer for the Senate sergeant-at-arms, who oversees the chamber’s computers as well as security, said in a one-page memo reviewed by The New York Times that aides could use Google’s Gemini chat, OpenAI’s ChatGPT or Microsoft Copilot, which is already […]

https://rbfirehose.com/2026/03/11/new-york-times-chatgpt-other-chatbots-approved-for-official-use-in-the-senate/

No, it's not. It's a mathematical model sitting on a server that belongs to a stranger, a business, a company that cares nothing for your welfare.

New York Senate Bill S7263 imposes liability for damages caused by a chatbot impersonating certain licensed professionals. nysenate.gov/legislation/bills

Bitdefender: AI Isn’t Your Lawyer or Doctor: New York Lawmakers Say It’s Time to Draw the Line bitdefender.com/en-us/blog/hot #chatbots

2026-03-10

‘Our consciousness is under siege’: Michael Pollan on #chatbots, #socialmedia and mental #freedom

In his new book, the celebrated author explains why we need ‘consciousness hygiene’ to defend ourselves from #AI and dopamine-driven algorithms

theguardian.com/wellness/2026/

#etica #ia #bigtechs

Richard Michael Blaber rmblaber1956
2026-03-10

theguardian.com/uk-news/2026/m. "Parliamentarians voted 307 to 173, majority 134, against the proposed change to the children's wellbeing & schools bill, which was brought forward by Conservative peer & former minister John Nash... Under the amendment in lieu, the secretary, Liz Kendall, could 'restrict or ban [children] of certain ages from accessing services'. She could also limit children's use..."

2026-03-10

(2/2) Don't treat it like a magic box that either works or doesn’t. Use it in a proactive, critical & engaged way. AI needs direction, feedback and correction – ultimately you’re responsible. It’s your job to keep it on track and make sure the output is up to scratch. #AI #Chatbots #LLMs #HowTo

Chris Mackay 🇨🇦tantramar@zeroes.ca
2026-03-10

Prof. Casey Fiesler on how a chatbot developer had some pretty serious regrets. (It’s a wetware problem, people.) #AI #ChatBots #psychology

youtube.com/shorts/c_MBcdaTAho

2026-03-10

Engadget: OpenAI is reportedly pushing back the launch of its ‘adult mode’ even further. “More specifically, OpenAI’s spokesperson said that things like ‘gains in intelligence, personality improvements, personalization, and making the experience more proactive’ were being prioritized instead. However, the company still wants to release an adult mode, but it would ‘take more time,’ according to […]

https://rbfirehose.com/2026/03/10/engadget-openai-is-reportedly-pushing-back-the-launch-of-its-adult-mode-even-further/
