#CSAM

propapanda (panda@pandas.social)
2026-01-28

There would already be ways to do something about this, but apparently that is not wanted.

#CSAM stays online: netzpolitik.org/2025/geheimer-

Under the #DSA, services are required to assess reported content for legality: dsc.bund.de/DSC/DE/3Verbrauche

Some people are already posting hate speech (#hetze) under their real names (#Klarnamen). For examples -> go on #YouTube.

The police can't/won't handle the internet: youtube.com/watch?v=Xdm8SG8_v0I

A really bad shitshow.

#klarnamenpflicht #extremismus #sozialeNetzwerke #depol

@andre_meister

2026-01-28

The State-Led Crackdown on #Grok and #xAI Has Begun

At least 37 attorneys general for US states and territories are taking action against xAI after Grok generated a flood of #nonconsensual #sexual images of women and minors.
#ag #csam #privacy

wired.com/story/the-state-led-

2026-01-27

@slightlyoff

Google approved dozens of apps that sexually assault? Apple has dozens of apps whose purpose is to humiliate and sexually assault? Including children?

Break them up. Make them small, tax them fairly and charge them.

Make arrests.

#eu #uspoli #csam #google #apple

Kevin Karhan (kkarhan@infosec.space)
2026-01-27

@ij @alvar And yes, #CSAM is in some cases deleted even faster after reports than ever before, but all the accomplices by omission (#Mittäter durch #Unterlassung), from #Reul to #Zensursula, couldn't care less.

I want my fundamental rights (#Grundrechte) back - all of those that have been restricted since 1949 - WITH COMPOUND INTEREST!

#Vorratsdatenspeicherung #Polizeistaat #vds #Überwachung #Polizeiproblem #Zensur

Glyn Moody (glynmoody)
2026-01-27

‘Among the worst we’ve seen’: report slams Grok over child safety failures - techcrunch.com/2026/01/27/amon "inadequate identification of users under 18, weak safety guardrails, and frequently generates sexual, violent, and inappropriate material."

2026-01-27

CBS News: X, Grok AI still allow users to digitally undress people without consent, as EU announces investigation. “The tool still worked Monday on both the standalone Grok app, and for verified X users in the U.K., the U.S. and European Union, despite public pledges from the company to stop its chatbot allowing people to use artificial intelligence to edit images of real people and show them in […]

https://rbfirehose.com/2026/01/27/cbs-news-x-grok-ai-still-allow-users-to-digitally-undress-people-without-consent-as-eu-announces-investigation/

2026-01-27

EU launches inquiry into X over sexually explicit images made by Grok AI

quokk.au/c/world/p/621086/eu-l

2026-01-27

#EU launches formal investigation of #xAI over Grok's #sexualized #deepfakes

The EU has launched a formal investigation into Elon Musk’s xAI following a public outcry over how its #Grok chatbot spread sexualized images of women and #children.
#Musk #privacy #csam

arstechnica.com/tech-policy/20

AI Daily Post (aidailypost)
2026-01-26

Payment processors are reconsidering their stance on CSAM after the AI model Grok was implicated, while Elon Musk’s lawsuit over the issue has been dismissed. The shift could reshape how tech platforms handle image‑generation abuse and digital hate. What does this mean for open‑source AI? Read the full story.

🔗 aidailypost.com/news/payment-p

Glyn Moody (glynmoody)
2026-01-26

RE: mas.to/@gabrielesvelto/1159612

totally hypocritical if they don't...

2026-01-26

Today, the European Commission (EC) opened a formal investigation into Grok, the AI chatbot integrated into X, under the Digital Services Act (DSA) for allowing users to easily create and disseminate fake sexualised and nude pictures based on real people’s photographs without their consent.

The Commission states that X may have failed to assess and mitigate the systemic risks posed by its platform, including the dissemination of illegal content with “negative effects in relation to gender-based violence, and serious negative consequences to physical and mental well-being”.

edri.org/our-work/edri-calls-f…

#edri #X #grok #DSA #EU #CSAM #GBV #AI #harm

2026-01-25

theguardian.com/us-news/2026/j

Under the Trump administration, the US Department of Justice has slashed funding and training resources for law enforcement working on investigations and prosecutions of sex crimes against children, limiting their ability to carry out this work.

Major cuts include the cancellation of the 2025 National Law Enforcement Training on Child Exploitation, due to be held in Washington DC in June. The conference is an annual event that provides technical training to prosecutors and to state and federal law enforcement officers on investigating online crimes against children.

The sweeping cuts, enacted soon after Donald Trump began his second term as US president, are putting vulnerable children at risk and impeding efforts to bring child predators to justice, according to four prosecutors and law enforcement officers specializing in cases of child sexual exploitation, speaking on the condition of anonymity.

#MAGA #csam #childtrafficking #trump #jeffreyEpstein #Epstein #steveBannon #StephenMiller #ElonMusk #fascism #salo #prea #sexcrimes #USpol #j6

Public Enemy Exposed (pee@mastodon.online)
2026-01-25

Additionally, the ‘Ask to Buy’ feature for installing apps suddenly stopped working, and now I need to approve installs manually on the device, even while I am at home.

For 🤬 sake Apple, fix & simplify these basic features so we can protect our children. It can't be that hard!

#Apple #iOS #CSAM #ChatControl #Children

Public Enemy Exposed (pee@mastodon.online)
2026-01-25

I now understand why some parents avoid activating ‘Family Sharing’ and child protection features on Apple devices.

My son’s iPad blocked him from calling me yesterday because it detected nudity somewhere along the line. I have deactivated the feature now, but I doubt there was any nudity, since he can only contact me, my wife, and his sister. I think it was a false positive, and it just goes to show that CSAM scanning and 'Chat Control' are a load of BS.

#Apple #iOS #CSAM #ChatControl #Children

Andreas K (yacc143)
2026-01-24

Or they could take the reports that some UK CSAM watchdog groups have produced, contact them, and delete the images those reports identified -> nope, they didn't do that either.

That's why I'm saying X is HOSTING CSAM. Not generating it anymore, because I know they claim to have fixed that.

Andreas K (yacc143)
2026-01-24

@stuartl @selzero They "fixed" it in the sense that Grok does not create new CSAM (as far as you can fix an LLM).

OTOH, AFAIK, they did not go so far as to go back through the old generated images to find what is illegal and delete it. (It could be done more or less automatically -> e.g. run the old prompts through Grok again; if a prompt passes the safety layer, keep the corresponding image. That's not 100% perfect, but it shows you care, and as for the couple of questionable images that slip through, well, mistakes happen.)
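A minimal sketch of that retroactive audit idea in Python, assuming the platform kept a log pairing each generated image with its prompt; every name here (`GeneratedImage`, `safety_check`, `audit_archive`) and the keyword blocklist are hypothetical stand-ins, not X's or xAI's actual API:

```python
# Hypothetical sketch of the retroactive audit described above:
# re-run each archived prompt through a safety classifier and
# queue for deletion any image whose prompt no longer passes.

from dataclasses import dataclass


@dataclass
class GeneratedImage:
    image_id: str
    prompt: str


def safety_check(prompt: str) -> bool:
    """Stand-in for a prompt-level safety classifier.

    A real system would call the platform's moderation layer;
    this toy version just blocks an illustrative keyword list.
    """
    blocked_terms = {"undress", "nude", "minor"}
    return not any(term in prompt.lower() for term in blocked_terms)


def audit_archive(archive: list[GeneratedImage]) -> tuple[list[str], list[str]]:
    """Split archived image IDs into 'keep' and 'delete' by re-checking prompts."""
    keep: list[str] = []
    delete: list[str] = []
    for item in archive:
        (keep if safety_check(item.prompt) else delete).append(item.image_id)
    return keep, delete


if __name__ == "__main__":
    archive = [
        GeneratedImage("img-001", "a watercolor landscape at dusk"),
        GeneratedImage("img-002", "undress the person in this photo"),
    ]
    keep, delete = audit_archive(archive)
    print(f"keep: {keep}")
    print(f"delete: {delete}")
```

As the post itself concedes, prompt-level re-checking misses images whose prompts look innocuous, so it flags a conservative subset rather than guaranteeing a clean archive.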

2026-01-24

Politico: Grok could have produced 3 million sexual deepfakes in 11 days, says estimate. “A study on the artificial intelligence chatbot Grok embedded on X estimates it created 3 million sexualized images in 11 days in January, including 23,000 of children. Meanwhile European regulators have yet to decide how to handle the explosion of nonconsensual deepfakes on the already embattled platform.”

https://rbfirehose.com/2026/01/24/politico-grok-could-have-produced-3-million-sexual-deepfakes-in-11-days-says-estimate/

Andreas K (yacc143)
2026-01-24

@selzero Surely you understand that the real victims are rich (bastards) whose celebrity status could be blemished by that publication.

Did you notice, BTW, that X is hosting a massive amount of well-documented CSAM, and that it has not been shut down or blocked anywhere, as any normal website that did that would be?

Frankie ✅ (Some_Emo_Chick)
2026-01-23

Grok could have produced 3 million sexual deepfakes in 11 days, says estimate

politico.eu/article/grok-x-3-m
