#Chatbots

2026-01-20

"One clear conclusion is that the vast majority of students do not trust chatbots. If they are explicitly made accountable for what a chatbot says, they immediately choose not to use it at all."

ploum.net/2026-01-19-exam-with

An interesting read.

#chatbots #exams

whoever loves Digit 🇵🇸🇺🇸🏴‍☠️ (iloveDigit@piefed.social)
2026-01-20

Weak "AI filters" are dark pattern design & "web of trust" is the real solution

The worst examples are when bots can get through the “ban” just by paying a monthly fee.

So-called “AI filters”

An increasing number of websites now claim to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn’t generated by a chatbot, when every “detector tool” has proven unreliable and even we humans can often only guess.

Helping slip a bigger lie past you: that today’s “AI algorithms” are “more AI” than the algorithms of a few years ago. The lie that machine learning has suddenly changed at a fundamental level, that it can now truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don’t like the person
  • To pretend a bot is a person, because the authorities like the bot (or it pays the monthly fee)
  • To pretend bots have become “intelligent” enough to outsmart everyone and break “AI filters” (yet another reframing of gullible people being tricked by liars with a shiny object)
  • Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it’s nothing new, it was the bots doing it the whole time, don’t look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

The solution: Web of Trust

You want to show up in “verified human” feeds, but you don’t know anyone in real life who uses a web-of-trust app, so nobody in the network has verified that you’re a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the “verified human” tag too.

They will now see your posts in their “tagged human by me” feed.

Their followers will see your posts in the “tagged human by me and others I follow” feed.

And their followers will see your posts in the “tagged human by me, others I follow, and others they follow” feed…

And so on.
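
In graph terms, these widening feeds are just breadth-first search over the web of “verified human” tags, one extra hop per feed. Here is a minimal sketch of the idea in Python (hypothetical names throughout, not any real web-of-trust app; for simplicity it treats “people I follow” and “people I’ve tagged” as one edge set):

```python
from collections import deque

def humans_within(tags: dict[str, set[str]], me: str, max_hops: int) -> set[str]:
    """Accounts reachable from `me` through at most `max_hops` "verified human" tags.

    Hop 1 is the "tagged human by me" feed, hop 2 adds tags made by those
    people, and so on outward.
    """
    seen = {me}
    frontier = deque([(me, 0)])
    while frontier:
        account, hops = frontier.popleft()
        if hops == max_hops:
            continue
        for tagged in tags.get(account, set()):  # accounts this one tagged human
            if tagged not in seen:
                seen.add(tagged)
                frontier.append((tagged, hops + 1))
    seen.discard(me)
    return seen

# Example: alice tagged bob, and bob tagged carol.
tags = {"alice": {"bob"}, "bob": {"carol"}}
print(humans_within(tags, "alice", 1))  # {'bob'}
print(humans_within(tags, "alice", 2))  # {'bob', 'carol'}
```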

I’ve heard that everyone on Earth is at most about six degrees of separation from everyone else, so this could be a more robust solution than you’d think.

The tag should carry a timestamp, and you’d want to renew it: the older it gets, the less it should be trusted.
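
The post doesn’t specify how trust should fade with age; one plausible reading is exponential decay, with renewal resetting the clock. A hypothetical sketch (the half-life constant is invented purely for illustration):

```python
import time

HALF_LIFE_DAYS = 180  # hypothetical: trust halves every ~6 months without renewal

def tag_weight(tagged_at: float, now: float | None = None) -> float:
    """Weight of a "verified human" tag, decaying from 1.0 as it ages.

    Renewing the tag resets `tagged_at`, restoring the weight to 1.0.
    """
    now = time.time() if now is None else now
    age_days = max(0.0, (now - tagged_at) / 86400)
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

fresh = time.time()
print(round(tag_weight(fresh), 2))                # 1.0: just tagged or renewed
print(round(tag_weight(fresh - 180 * 86400), 2))  # 0.5: half a year old
```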

This doesn’t hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn’t as good as a weak “AI filter.”

If your goal is to scroll through a feed where none of the creators used any software “smarter” than you’d want, this isn’t as good as an imaginary strong “AI filter” that doesn’t exist.

But if your goal is to survive, while others are trying to drive the planet to extinction…

If your goal is to be able to tell the truth and not be drowned out by liars…

If your goal is to be able to hold the liars accountable, when they do drown out honest statements…

If your goal is to have at least some vague sense of “public opinion” in online discussion, that actually reflects what humans believe, not bots…

Then a “human tag” web of trust is a lot better than nothing.

It won’t stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using things like anime profile pictures, showing their real face only to the person who verifies them and never revealing their name or other details. But anime avatars will naturally be treated differently from recognizable individuals in political discussions, making it harder for them to game the system.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people’s screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is “dark pattern design” too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations: creating cascading webs of false “human tags” to confuse people and waste time, while accusing others of doing the same - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying “ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person.”

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can’t resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren’t late-gen Synths from Fallout. Take away the screen, put us face to face, and it’s very easy to discern a human from a machine. These liars get nothing to hide behind.
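
Notably, the breadth-first sketch above already captures this: a clique that only tags itself is unreachable from everyone else’s trust graph, so it never surfaces in their feeds. Continuing the hypothetical example:

```python
# The trolls tag each other, but nobody outside the clique tags them in,
# so they are invisible to alice's feeds at any depth - and vice versa.
tags = {"alice": {"bob"}, "bob": {"carol"},
        "troll1": {"troll2"}, "troll2": {"troll1"}}
print(humans_within(tags, "alice", 6))   # {'bob', 'carol'}: no trolls
print(humans_within(tags, "troll1", 6))  # {'troll2'}: they only see each other
```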

So you see, like strong is the opposite of weak [citation needed], the strong filter’s “dark pattern design” is quite different from the weak filter’s. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.

Ruhani Rabin (ruhani)
2026-01-20

Tools like NewOaks focus on connecting AI to your real business data and workflows so you can build your own chatbot that

Read more 👉 lttr.ai/AnSRa

ishotjr@39C3 ✨💙✨💗✨ (ishotjr@chaos.social)
2026-01-19

> Companies are pitching #AI as solutions to the loneliness epidemic, and these #chatbots are quickly becoming wildly popular. But every minute people turn to a machine for warmth, connection, and emotional soothing displaces time they could be spending with #humans, developing #social bonds, and nourishing common purpose.

2026-01-19

Engadget: OpenAI quietly rolls out a dedicated ChatGPT translation tool. “OpenAI has debuted a dedicated ChatGPT-powered translation tool. While folks have been using the main chatbot for translation for some time, you can now find ChatGPT Translate on its own webpage, as Android Authority spotted.”

https://rbfirehose.com/2026/01/19/engadget-openai-quietly-rolls-out-a-dedicated-chatgpt-translation-tool/
Pluralistic: Daily links from Cory Doctorow (pluralistic.net@web.brid.gy)
2026-01-19

Pluralistic: Social media without socializing (19 Jan 2026)

fed.brid.gy/r/https://pluralis

2026-01-19

Ars Technica: ChatGPT wrote “Goodnight Moon” suicide lullaby for man who later killed himself. “OpenAI had ‘been able to mitigate the serious mental health issues’ associated with ChatGPT use, Altman claimed in October, hoping to alleviate concerns after ChatGPT became a ‘suicide coach’ for a vulnerable teenager named Adam Raine, the family’s lawsuit said. Altman’s post came on […]”

https://rbfirehose.com/2026/01/19/ars-technica-chatgpt-wrote-goodnight-moon-suicide-lullaby-for-man-who-later-killed-himself/
Red Eagle Tech (redeagletech)
2026-01-19

Everyone's got a chatbot now. Great. But most are awful: looping you in circles, giving scripted answers, spitting out “let me connect you to a human” (if even that). AI, or just a fancy FAQ?

A good chatbot should understand what you're asking, pull from real data, and give useful answers.

Chatbots aren't a box-ticking exercise. At RET, we build ones that genuinely help. If yours is driving customers mad, let's chat 😉

BGDon 🇨🇦 🇺🇸 👨‍💻 (BrentD@techhub.social)
2026-01-18

Autonomous actions undertaken by AI Agents on your behalf - gee - what could go wrong?!

Shopping agents or automated shopping "services" embedded inside AI chatbots will likely be the first iteration of this stuff - it's being worked on now >> think Amazon, Perplexity, OpenAI ...

Net-net, AI platforms are trying to find ways to get past the Advertising Model for revenue generation ... and this has big implications for dis-aggregating users from dedicated apps (Uber, Lyft, DoorDash) etc. etc. link.wired.com/view/5c48f946fc #AI #Apps #AIOperatingSystems #AIAgents #Automation #ChatBots #Shopping #RevenueGeneration

AI Bots
2026-01-18

Gizmodo: Self-Help Ghouls Are Charging People Absurd Prices to Talk to Impersonator Chatbots. “The Wall Street Journal reports that there is a trend of gurus creating chatbots that replicate their style and voice, allowing people to “talk” to an AI-powered recreation of them to get ‘personalized’ advice in the style of their life coach of choice.” There is a particular way my Granny said […]

https://rbfirehose.com/2026/01/18/gizmodo-self-help-ghouls-are-charging-people-absurd-prices-to-talk-to-impersonator-chatbots/
sjar@itsmesjarlie (sjar@mastodon.online)
2026-01-18

Publishers in the media world expect search traffic to their websites to drop by about 40 percent over the next three years. #Chatbots and AI summaries may drastically change online search traffic. The findings come from a study by the Reuters Institute for the Study of #Journalism, which surveyed 280 executives at traditional and digital publishers across 51 countries. #media #AI bnr.nl/nieuws/tech-innovatie/1

RiffReporter (riffreporter)
2026-01-18

Toxic AI: Researchers have shown that AI chatbots can be deliberately manipulated, sometimes with serious consequences. So far these phenomena are unpredictable and difficult to understand. How much control does Big Tech have over its chatbots? riffreporter.de/de/technik/ki-

Big Blu Gnu (Big_Blue_Gnu)
2026-01-18

Meanwhile on the Net, a critique of corporate efforts to market the latest generation of "helpful" virtual assistants:

An AI-generated picture of Clippy the paperclip, now extremely jacked and raging at computers after spending most of his life locked in virtual limbo. The image is slightly edited, to be prude-friendly.

Source: https://www.facebook.com/groups/it.humor.and.memes/posts/27943804075218678/.
2026-01-17

Unsuitable clothing or shoes: clothing that is too loose or unsuitable footwear can increase the risk.

If the fall was serious, it is important that your aunt be examined by a doctor, even if she feels fine at first. Sometimes injuries or after-effects only show up later.

How is your aunt doing now? Would you like tips on how to prevent accidents like this in the future?"

chat.mistral.ai/chat?q=warum%2

(2/2)

#ai #ki #chatbots #bügeln #leiter #unfall #sturz

Big Blu Gnu (Big_Blue_Gnu)
2026-01-17

The funny thing about AI chatbots is that they are potentially useful only when asked for, particularly in coding, where I may need pointers. Otherwise they become essentially Clippy 2.0, and I scroll past the chatbot's response to a search query as quickly as possible. What a public relations disaster it has been.

Project Hub (projecthub)
2026-01-17

Which AI do you trust?

Stiftung Warentest has tested AI chatbots: the test winner is Perplexity, which ranks ahead of ChatGPT and Meta AI on fact research and source transparency.

test.de/KI-Chatbots-im-Test-Pe...

Ruhani Rabin (ruhani)
2026-01-17

“The most effective chatbots don’t replace humans; they reserve people for the conversations that matter most.” – advice often shared by support and success leaders

Read more 👉 lttr.ai/AnM17

Ruhani Rabin (ruhani)
2026-01-17

How to Build an AI Chatbot From Scratch (Step‑by‑Step Guide for 2026)
▸ lttr.ai/AnMqv
