DROP\ TABLE Hacker of Earthsea (@ChickenPwny@infosec.exchange)
2025-10-03

Refactored Analysis of TikTok's Content Moderation

The #TikTok CEO's comments seem to confirm my suspicion: the #platform is designed to deliver #content without adequately filtering for #extremistviewpoints during its rapid-fire serving process. A faster #review at the moment a #video is uploaded is technically possible, but the current #moderationprocess clearly does too little before #content reaches the public.
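In other words, the pipeline behaves as if serving and review are decoupled. Here is a minimal sketch of that serve-first shape; every name in it is hypothetical, and nothing here is TikTok's actual code.

```python
# Hypothetical sketch of a "serve first, review later" pipeline, as the
# post describes it. All names are illustrative, not TikTok's actual code.
from collections import deque

review_queue: deque[str] = deque()   # asynchronous moderation backlog
feed: list[str] = []                 # content already servable to users

def on_upload(video_id: str) -> None:
    feed.append(video_id)          # servable immediately, before any review
    review_queue.append(video_id)  # moderation happens later, if at all

def run_moderation_pass() -> None:
    while review_queue:
        video_id = review_queue.popleft()
        print(f"reviewing {video_id} after it may already have been served")

on_upload("clip-123")
print("feed now contains:", feed)   # the clip is live...
run_moderation_pass()               # ...and only now does review catch up
```

The ordering is the whole point: nothing gates `feed` on the review outcome, so moderation can only ever clean up after the fact.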

This #system works exactly as #badactors need it to. They abuse the platform to spread #radicalizedcontent, and #TikTok appears to do little about it. This is less a failure of #proactive effort and more a consequence of the platform's core #design.

The Algorithmic "#RabbitHole"

The system's priorities appear to be purely #political and #engagement-driven. If a user engages with a single piece of #extremistcontent, even via an unrelated #hashtag, the #algorithm starts pushing a flood of similar extremist material at them. This produces a confusing population of user profiles engaging with conflicting #ideologies (both #extremistright and #extremistleft content, for example), which makes it hard to pin down any one person's actual #politicalalignment.
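That rabbit-hole dynamic falls straight out of plain item-to-item similarity ranking. A toy sketch under that assumption, with made-up video names and tags; this is the shape of the failure, not TikTok's recommender:

```python
# Toy item-to-item recommender: made-up names, not TikTok's algorithm.
# The ranker only sees tag overlap; ideology and harm are invisible to it.
videos = {
    "craft-1":   {"crafts"},
    "craft-2":   {"crafts"},
    "extreme-1": {"crafts", "fringe", "rage"},  # extremist clip riding a craft tag
    "extreme-2": {"fringe", "rage"},
    "extreme-3": {"fringe", "rage"},
}

def recommend(last_engaged: str) -> list[str]:
    # Rank the other videos by how many tags they share with the last engagement.
    target = videos[last_engaged]
    return sorted(
        (v for v in videos if v != last_engaged),
        key=lambda v: len(videos[v] & target),
        reverse=True,
    )

print(recommend("craft-1"))    # crafts content leads; one cross-tagged clip trails
print(recommend("extreme-1"))  # a single tap later, fringe clips dominate
```

One engagement flips the whole ranking, and nothing in the objective knows or cares that the "fringe" cluster is extremist; it is simply the strongest similarity signal.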

I believe the #AI is designed to profile every possible vulnerability (#loneliness, #desperation, #politicalviews) to keep users #engaged for as long as possible. The system operates without any discernible #ethicallogic. If ethical guardrails exist, the #AI certainly doesn't appear to be learning in a safe or positive way.

Source: ted.com/talks/shou_chew_tiktok
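To make the "no #ethicallogic" claim concrete: if the training objective is just predicted watch time, harm never enters the math. A toy objective under that assumption, with illustrative names only, not TikTok's actual loss:

```python
# Toy engagement objective: illustrative only, not TikTok's actual model.
# The score rewards predicted watch time for whatever the profile responds
# to (loneliness, outrage, etc.); nothing in the objective penalizes harm,
# so nothing ever steers the ranker away from harmful content.
from dataclasses import dataclass

@dataclass
class Video:
    name: str
    pulls: dict[str, float]   # how strongly the clip targets each vulnerability
    harmful: bool             # known to moderation, invisible to the ranker

def predicted_watch_time(profile: dict[str, float], video: Video) -> float:
    # Pure engagement term: user susceptibility dotted with video pull.
    return sum(profile.get(k, 0.0) * w for k, w in video.pulls.items())

profile = {"loneliness": 0.9, "outrage": 0.7}   # inferred vulnerabilities
candidates = [
    Video("gardening-tips", {"curiosity": 0.8}, harmful=False),
    Video("rage-bait",      {"outrage": 0.9, "loneliness": 0.5}, harmful=True),
]

best = max(candidates, key=lambda v: predicted_watch_time(profile, v))
print(best.name)   # -> rage-bait
```

The harmful clip wins not because anything maliciously targets it at the user, but because the `harmful` flag never appears in the objective at all.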
