Weak "AI filters" are dark pattern design & "web of trust" is the real solution
The worst examples are when bots can get through the "ban" just by paying a monthly fee.
So-called "AI filters"
An increasing number of websites lately claim to ban AI-generated content. This is a lie, deeply tied to other lies.
Building on a well-known lie: that they can tell what is and isn't generated by a chat bot, when every "detector tool" has proven unreliable, and sometimes even we humans can only guess.
Helping slip a bigger lie past you: that today's "AI algorithms" are "more AI" than the algorithms of a few years ago. The lie that machine learning has just changed at a fundamental level, that suddenly it can truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.
Supporting future lying opportunities:
- To pretend a person is a bot, because the authorities don't like the person
- To pretend a bot is a person, because the authorities like the bot (or it pays the monthly fee)
- To pretend bots have become "intelligent" enough to outsmart everyone and break "AI filters" (yet another reframing of gullible people being tricked by liars with a shiny object)
- Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it's nothing new, it was the bots doing it the whole time, don't look behind the curtain at the humans who helped
- And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore
The solution: Web of Trust
You want to show up in "verified human" feeds, but you don't know anyone in real life who uses a web of trust app, so nobody in the network has verified that you're a human.
You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the "verified human" tag too.
They will now see your posts in their "tagged human by me" feed.
Their followers will see your posts in the "tagged human by me and others I follow" feed.
And their followers will see your posts in the "tagged human by me, others I follow, and others they follow" feed…
And so on.
I've heard that everyone on Earth is generally within a maximum of six degrees of separation from everyone else, so this could be a more robust solution than you'd think.
The tag should have a timestamp on it. You'd want to renew it, because the older it gets, the less people trust it.
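Here is a minimal sketch of how that could work in practice. The names (HumanTag, TrustGraph, vouch, visible_authors) and the degree/age cutoffs are illustrative assumptions, not taken from any existing app, and for brevity it treats "tagged" and "followed" as a single vouch edge rather than the separate feeds described above.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone

@dataclass
class HumanTag:
    tagger: str        # account that met the person and vouched for them
    tagged: str        # account being vouched for
    issued: datetime   # tags age out, so they have to be renewed

@dataclass
class TrustGraph:
    tags: list[HumanTag] = field(default_factory=list)

    def vouch(self, tagger: str, tagged: str) -> None:
        """Record an in-person "verified human" tag."""
        self.tags.append(HumanTag(tagger, tagged, datetime.now(timezone.utc)))

    def visible_authors(self, viewer: str, max_degrees: int = 3,
                        max_age: timedelta = timedelta(days=365)) -> set[str]:
        """Accounts whose posts appear in the viewer's "tagged human" feed:
        anyone reachable through a chain of fresh tags, up to max_degrees hops."""
        now = datetime.now(timezone.utc)
        fresh = [t for t in self.tags if now - t.issued <= max_age]
        frontier, seen = {viewer}, {viewer}
        for _ in range(max_degrees):
            frontier = {t.tagged for t in fresh
                        if t.tagger in frontier and t.tagged not in seen}
            if not frontier:
                break
            seen |= frontier
        return seen - {viewer}

# Example: alice vouches for bob over lunch; bob later vouches for carol.
graph = TrustGraph()
graph.vouch("alice", "bob")
graph.vouch("bob", "carol")
print(graph.visible_authors("alice", max_degrees=1))  # {'bob'}
print(graph.visible_authors("alice", max_degrees=2))  # {'bob', 'carol'}
```

Because only tags younger than max_age count, an account that never gets re-verified quietly ages out of everyone's feeds - which is the renewal behavior described above.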
This doesn't hit the same goalposts, of course.
If your goal is to avoid thinking, and just be told lies that sound good to you, this isn't as good as a weak "AI filter."
If your goal is to scroll through a feed where none of the creators used any software "smarter" than you'd want, this isn't as good as an imaginary strong "AI filter" that doesn't exist.
But if your goal is to survive, while others are trying to drive the planet to extinction…
If your goal is to be able to tell the truth and not be drowned out by liars…
If your goal is to be able to hold the liars accountable, when they do drown out honest statements…
If your goal is to have at least some vague sense of "public opinion" in online discussion, that actually reflects what humans believe, not bots…
Then a "human tag" web of trust is a lot better than nothing.
It won't stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.
Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anonymous anime avatars will naturally be treated differently from recognizable individuals in political discussions, which makes it harder for them to game the system.
To flood a discussion with lies, racist statements, etc., the people doing the flooding should have to take some accountability for those lies and racist statements - at least if they want to show up on people's screens and be taken seriously.
A different dark pattern design
You could say the human-tagging web of trust system is "dark pattern design" too.
This design takes advantage of human behavioral patterns, but in a completely different way.
When pathological liars encounter this system, they naturally face certain temptations: creating cascading webs of false "human tags" to confuse people and waste time, and meanwhile accusing others of doing the same - wasting even more time.
And a more important temptation: echo chambering with others who use these lies the same way. Saying "ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person."
They can cluster together in a group, filtering everyone else out, calling them bots.
And if they can't resist these temptations, it will make them just as easy for everyone else to filter out. Because at the end of the day, these chat bots aren't late-gen Synths from Fallout. Take away the screen, put us face to face, and it's very easy to discern a human from a machine. These liars get nothing to hide behind.
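To make that concrete, here is the same hypothetical TrustGraph sketch from above, extended with a liar cluster that only vouches for itself. No chain of fresh tags connects the honest side to the cluster, so its members never surface in an honest viewer's feed - and the cluster, in turn, only ever sees itself.

```python
# Reusing the hypothetical TrustGraph from the earlier sketch.
graph = TrustGraph()
graph.vouch("alice", "bob")       # honest users vouch for people they've met
graph.vouch("troll1", "troll2")   # the liar cluster only vouches for itself
graph.vouch("troll2", "troll3")
graph.vouch("troll3", "troll1")

# No tag chain leads from alice into the cluster, so even at six degrees
# the trolls never show up in her "verified human" feed, and vice versa.
print(graph.visible_authors("alice", max_degrees=6))   # {'bob'}
print(graph.visible_authors("troll1", max_degrees=6))  # {'troll2', 'troll3'}
```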
So you see, just as strong is the opposite of weak [citation needed], the strong filter's "dark pattern design" is quite different from the weak filter's. Instead of preying on honesty, it preys on the predatory.
Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.