#ModerationMeta

Cragsand (@cragsand@mastodon.social)
2023-12-13

Thank you everyone who helped bring attention to this!
But looks like it was shut down by Twitch.

They misinterpreted the whole thing, perhaps intentionally. AI is being used to flag and ban without considering context, and I hope this is reconsidered.

"Twitch Global AI AutoMod does not understand context."
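A toy illustration of the claim above: a purely keyword-based filter has no notion of context, so a joke and a threat that happen to share a word look identical to it. This is a hypothetical sketch only; the word list, function name, and logic are my own assumptions and not Twitch's actual AutoMod implementation.

```python
# Hypothetical, context-free keyword flagger (NOT Twitch's real system).
BANNED_TERMS = {"kill", "scam"}  # toy word list, purely illustrative

def automod_flag(message: str) -> bool:
    """Flag a message if any banned term appears, ignoring all context."""
    words = message.lower().split()
    return any(term in words for term in BANNED_TERMS)

# A joke about a video game boss and a genuine threat are
# indistinguishable to the keyword check: both contain "kill".
print(automod_flag("lol this boss will kill me again"))  # True (flagged)
print(automod_flag("nice weather today"))                # False
```

A human moderator reading the surrounding chat would immediately recognize the first message as a joke; a filter like this cannot.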

Response from Twitch, misinterpreting the post:

Hi, thanks for taking the time to share your feedback with us. We wanted to clarify a couple of things. AutoMod can’t ban people from Twitch. Mods can use the tool to identify chat messages that break their channel’s rules. But AutoMod can’t be used to timeout, ban, or mute users from any channel, or from Twitch – those aren’t AutoMod capabilities.

We also wanted to share a bit more about how we enforce our Community Guidelines. These are the policies that apply to all of Twitch, and help make Twitch safer. We have human teams that review our content moderation decisions. We call this “human in the loop.” These reviewers help ensure that our policies are being applied accurately. We don’t use a “global AI” system to enforce our guidelines, and we manually review suspension decisions.

We recognize that, while the majority of enforcements we issue are correct, sometimes we may get it wrong... (cont)
Jérôme ابو عادل Singirankabo (@abouadil@mastodon.online)
2023-12-10

Colonising dreams: the digital content moderators of Kenya.

Article by @danahilliot on the "little hands" behind AI and big machines such as TikTok and others, and how this work ruins their lives.
outsiderland.com/danahilliot/c

A welcome complement to this English-language article (which I posted a while ago)
wired.co.uk/article/artificial
(More links in the article)

#kenya #eac #ia #ai #alienation #uberisation #ModerationMeta #exploitation

Cragsand (@cragsand@mastodon.social)
2023-11-26

Here it is: "Spirit AI" using "proactive" technology.

I fear this has backfired, instead meaning presumed guilty until proven innocent. It makes me think of Minority Report. The road to hell is paved with good intentions, and all that, I guess.

From a 2022 Twitch blog post:
safety.twitch.tv/s/article/An-

"Fortifying the technology that detects harmful text of all kinds on Twitch.

We recently completed the acquisition of Spirit AI, a leading natural language processing company who will help us continue to refine AutoMod and other proactive detection for catching harmful text or phrases sent on Twitch."
Cragsand (@cragsand@mastodon.social)
2023-11-26

Discussion regarding the Twitch moderation AI spread to Reddit, where I clarified some questions that arose:

Since this global AI AutoMod has remained an undocumented "feature" of Twitch chat for a while now, many of the conclusions I've listed in the thread are deduced from watching active chatters get suspended and tell their stories on Discord and social media.

Most can luckily get their accounts reinstated after appealing, but that relies on an actual human looking at the timestamp of the VOD, taking the time to figure out what actually happened, and getting the complete context of what was going on on stream when it occurred. I've seen many apologies from Twitch moderation sent by email after appeals, but whether you get unbanned, receive an apology, or stay banned seems mostly random.

Being banned like this also makes it much less likely that you will want to participate and joke around in chat in the future, leading to a much worse chat experience.

Some discussions argue that all AI-flagged moderation events are actually reviewed by humans (just poorly), and this is a possibility. Because of Twitch's lack of transparency about how this works, it's very difficult to know for sure how these reviews are done. A manual report combined with an AI flag is almost certainly a ban. One thing is certain, though: too much power is given to AI to judge these cases.

Seeing as permanent suspensions of accounts that have had active paying subscriptions on Twitch for YEARS can be dished out in seconds, either the reviewers are doing a lousy job, or it's mostly done by AI. Even worse, if the reviewers are underpaid workers paid by "number of cases solved per hour", there is little incentive for them to take the time to gather more context when reviewing.

It's likely that if Twitch gets called out for doing this, they have little incentive to admit it, as it may even violate consumer regulations in some countries. A response that they "will oversee their internal protocol for reviewing" may be enough of a win if it results in them actually turning this off. Since there is no transparency, we can't really know for sure.

A similar thing happened on YouTube at the start of 2023, when they went through the speech-to-text transcripts of all old videos and issued strikes retroactively. It made a lot of old channels disappear, especially those with hours of VOD content where something could get picked up and flagged by AI. For the communities I'm engaged in, it meant relying less on YouTube for archiving Twitch VODs. MoistCritical brought it up about a year ago, since it also affected the monetization of old videos.

#Twitch #Moderation #BadAI #AI #Enshittification #AutoMod #AIAutoMod #ModMeta #ModerationMeta
