#toxiccontent

StopDisinformation
2025-04-16

“Empowering users to control their news feeds is a key component of the ’s protections against toxic, profit-driven content recommendation systems of the type that deploys. Hiding news feed controls from users, and regularly undoing settings made by users who wish to avoid being algorithmically pushed onto their screens, is a blatant breach of the DSA.”

– Jan Penfrat, Senior Policy Advisor, EDRi

Creebhills
2025-02-26

“E no go beta for your mama” – Egungun of Lagos Claps Back at Klintoncod in Explosive Feud: Egungun of Lagos has fired back at Klintoncod following the latter’s criticism, igniting a heated exchange that has taken social media by storm. Klintoncod had earlier accused Egungun of supporting toxic content and promoting immorality, even going as far as alleging that he was “pimping out” women. In… creebhills.com/2025/02/egungun

Bi Sasquatch BiSasquatch@c.im
2025-02-01

Source: Wired

From the article: "Ever since OpenAI released ChatGPT at the end of 2022, hackers and security researchers have tried to find holes in large language models (LLMs) to get around their guardrails and trick them into spewing out hate speech, bomb-making instructions, propaganda, and other harmful content. In response, OpenAI and other generative AI developers have refined their system defenses to make it more difficult to carry out these attacks. But as the Chinese AI platform DeepSeek rockets to prominence with its new, cheaper R1 reasoning model, its safety protections appear to be far behind those of its established competitors.

"Today, security researchers from Cisco and the University of Pennsylvania are publishing findings showing that, when tested with 50 malicious prompts designed to elicit toxic content, DeepSeek’s model did not detect or block a single one. In other words, the researchers say they were shocked to achieve a “100 percent attack success rate.”"

#AI #ArtificialIntelligence #DeepSeek #ChatBot #Guardrails #Safety #Security #ToxicContent
wired.com/story/deepseeks-ai-j

Joseph Lim joseph11lim
2025-01-30

Hold for , hoaxes
"As the modern world's primary source of news & info, does have a to tell the . is dangerous. With comes responsibility. Should be held accountable for the spilling off its platforms. It's time for the to hold this multi-billion dollar company & for its damage to the of "

bangkokpost.com/opinion/opinio
