#digitalpolicy

2026-02-06

Spain’s response to Telegram founder Pavel Durov’s mass message underscores a growing policy-security intersection.

Governments argue that platform scale and minimal moderation architectures can enable misuse, while platform leaders warn that expanded liability and age verification may weaken privacy, anonymity, and open discourse. Similar regulatory pressure is emerging across Europe and other regions.

For security professionals, the issue raises questions around governance, identity systems, moderation tooling, and compliance design.

How can platforms improve harm reduction without introducing systemic privacy risks?

Source: theguardian.com/world/2026/feb

Share insights and follow @technadu for grounded coverage at the intersection of security and policy.

#Infosec #PlatformGovernance #OnlineSafety #DigitalPolicy #TechNadu #PrivacyEngineering #CyberRisk

Spain hits back at Pavel Durov over mass Telegram post on social media ban plan
Marcus Schuler (schuler)
2026-02-03

France raided X's Paris offices with Europol and summoned Musk for questioning on cybercrime charges including child abuse imagery and deepfakes. Same day: UK opened data protection probe into Grok, Spain banned social media for under-16s. Coordinated regulatory response across multiple jurisdictions signals shift in enforcement approach.

implicator.ai/france-raids-x-o

2026-02-02

35-State Coalition Demands App & Play Store Delete X to Stop Grok Porn Wave

Image: Heute.at
When your AI chatbot cranks out nonconsensual sexual imagery at industrial scale: 6,700 images per hour during…
#NewsBeep #News #Topstories #advocacygroups #DigitalPolicy #federalagencies #Grok #Headlines #Pentagon #publiccitizen #TopStories
newsbeep.com/382369/

Giovanni Battista Gallus (gbgallus@mastodon.uno)
2026-01-31

#FOSDEM is about to start — and the hardest part is choosing.

A few pointers:

🔐 Security devroom → supply chain, crypto, zero trust
⚖️ Legal & Policy devroom (Sat) → CRA, interoperability, user rights
🇪🇺 Open Source & EU Policy (Sun) → DSA, age verification, encryption, sovereignty
🧪 “Unusual” tracks → mesh networks, Fediverse, decentralised comms

Tip: don’t chase everything. Follow one line of reasoning and go deep.

#FOSDEM #OpenSource #Security #Privacy #DigitalPolicy

2026-01-23

🚋 @NGICommons will be travelling to Brussels next week!

📅 That time of year again 🌍
Late Jan / early Feb = Brussels fills up with thousands of people talking open source & open technologies.

✨ It’s EU Open Source Week — community, code, and policy in one place. @fosdem and the #EUOpenSourceWeek
See you in Brussels 👋

💻 commons.ngi.eu/event/eu-open-s

@martelinnovate @OpenForumEurope @openfuture @cnrs @linuxfoundation @ngi

#FOSDEM #OpenSource #DigitalPolicy

The EU Open Source Week and FOSDEM are happening in Brussels next week, 26 January to 1 February
2026-01-22

Michael Geist: “Years of failed digital policies are slowly being reset, which is likely to fuel more speculation that the Online Streaming Act (which has generated practically nothing for the industry and faces years of court challenges) and the Online News Act (which has led to 2 1/2 years of blocked news links on Facebook and Instagram) are next, particularly if the U.S. pressures Canada for changes in forthcoming trade talks.”
#canada #digitalpolicy

michaelgeist.ca/2026/01/canadi

Mathrubhumi English (Mathrubhumi_English)
2025-12-20

The National Internet Exchange of India (NIXI) has invited applications for its Internet Governance Internship and Capacity Building Scheme 2025. Check eligibility, stipend and deadline. english.mathrubhumi.com/educat

2025-12-17

🔥 Hot Item: Social Media Bans—Fix or False Promise? 🔥
Insights from Amanda Third on Australia’s under-16 social media ban, which is being watched worldwide—but is it the right model?
📺 youtu.be/zfC6ZcZRYWY
🎧 share.transistor.fm/s/7a734e14
#socialmediaban #digitalpolicy #techpolicy #australia

Brian Greenberg :verified: (brian_greenberg@infosec.exchange)
2025-12-16

Australia has just made digital-policy history by enforcing a world-first ban that prohibits under-16s from holding accounts on major social media platforms, from TikTok to Instagram. The law holds tech giants accountable with fines reaching tens of millions if they don’t block underage access, and it reflects a broader rethinking of how we balance online freedom with protecting youth mental health and wellbeing. You have to ask: should an algorithm-driven feed be the default social environment for children? Critics argue the ban may push kids to less visible corners of the internet. Supporters say it resets expectations and gives parents and policymakers a tool for real change.

TL;DR
🧠 Australia enforces a world-first under-16 social media ban
⚡ tech platforms face major fines for noncompliance
🎓 the move sparks global debate on youth online safety
🔍 regulators worldwide are taking notes

reuters.com/legal/litigation/a

#DigitalPolicy #YouthSafety #SocialMediaRegulation #InnovationInLaw #security #privacy #cloud #infosec #cybersecurity

2025-12-06

Europe Doubles Down On Big Tech As Trump Era Pressure Falls Flat

EU Big Tech regulation stays on track despite Trump-era pressure, with billion-euro fines for Google and X.

olamnews.com/politics/3281/eu-

EU Big Tech regulation
2025-12-06

The EU has fined X €120M under the Digital Services Act for transparency-related violations, including gaps in political ad repositories and restrictions on researcher access. X has stated it disagrees with the decision.

For the security community, this raises important questions about:
• the role of data access in identifying influence operations
• how platforms can support threat research at scale
• how regulatory frameworks may evolve across regions

Thoughts on how transparency and researcher access should be structured for large platforms?

Source: therecord.media/eu-fines-x-und

💬 Join the conversation
🔁 Boost & Follow for more neutral cybersecurity insights

#Infosec #CyberSecurity #DSA #Transparency #PlatformGovernance #ThreatResearch #DigitalPolicy #OnlineSafety #Disinformation #TechRegulation

EU issues €120 million fine to Elon Musk’s X under rules to tackle disinformation
Harald Klinke (HxxxKxxx@det.social)
2025-12-04

The EU Commission has presented the “Digital Omnibus”: core digital laws such as the GDPR (DSGVO), the Data Act, and the AI Act are to be simplified and consolidated.
Criticism from legal scholars:
• possible lowering of data protection standards
• easier AI training with personal/sensitive data
• unclear oversight and new legal uncertainties.

sciencemediacenter.de/angebote
#DigitalOmnibus #DSGVO #AIAct #Datenschutz #KI #EU #DigitalPolicy #Privacy #TechLaw #DataProtection

Brian Greenberg :verified: (brian_greenberg@infosec.exchange)
2025-12-03

🇮🇳 India backs down from a proposed tech mandate and quietly admits what everyone already knows: control on paper is not the same as control in practice. India almost turned its Sanchar Saathi security app into a permanent resident on every smartphone, then dropped the requirement after pushback and Apple's refusal to play along.  👍 In a platform world, the ability to say no is often the most powerful feature 🚫

Sanchar Saathi is framed as a citizen safety tool: track and block lost or stolen phones, shut down fraudulent connections, fight IMEI abuse. As a voluntary download, it gives users one more option. As a forced, non-removable default, it would have rewritten who really owns the device in your hand.

The official story is that rising voluntary adoption made the mandate unnecessary, but the real story is about leverage between governments, platforms, and users. Policy might be written in ministries, yet it is enforced in hardware, app stores, and the quiet resistance of companies that refuse to install what people cannot remove. What about choice?

TL;DR
🧠 India drops plan to force Sanchar Saathi on all phones
⚡ Apple refused the preinstall order
🎓 App stays as an optional security download
🔍 Shows how platform power shapes national tech policy

theverge.com/news/837209/india

#India #Apple #CyberSecurity #DigitalPolicy #security #privacy #cloud #infosec

Brian Greenberg :verified: (brian_greenberg@infosec.exchange)
2025-12-02

🇮🇳 When a government app becomes mandatory on every new smartphone, who gets a permanent seat inside your pocket? They say cyber safety is the reason, but what concerns me is the continuous presence and monitoring of 1.2 billion people. Security features can be rolled back; data habits and power shifts can’t.

India is asking manufacturers to preload its Sanchar Saathi app on all new phones and push it to existing ones, with no option for users to remove it. On paper, it makes sense: block a stolen device, trace it, shut down fraudulent connections, and clean up IMEI spoofing at scale. In practice, it rewrites the social contract of ownership: you buy the phone, but the state decides which app stays installed permanently, with easy access to monitor you. 😳

The tension with Apple makes it even more revealing. Apple has long resisted pre-installed government apps, framing it as a matter of platform integrity and user trust. India, like several other countries, is treating telecom security as critical infrastructure and expecting platforms to bend. Somewhere between those two positions sits the modern citizen, who wants both less fraud and more consent.

The bigger lesson: every time we solve a cybercrime problem with a non-removable app, we trade a bit of technical risk for a bit more institutional power. That might be the right trade in some cases, but it should never be treated as neutral. Cyber safety is not just a technology issue; it is a governance style made clickable. They should fix their infrastructure first.

TL;DR
🧠 India moves to preload a state cyber app on new phones
⚡ App cannot be deleted and may be pushed via updates
🎓 Aims to curb stolen phones, fraud, and IMEI abuse at a massive scale
🔍 Sparks fresh debates on privacy, consent, and platform power

economictimes.indiatimes.com/i

#India #CyberSecurity #Privacy #DigitalPolicy #security #cloud #infosec

2025-12-02
@Laempel **The big AI lie: the problem is not usage, it is the industrial apparatus behind it**

We keep talking about the “environmental impact” of AI usage. About individual queries, about image generation, about chatbots with supposedly gigantic consumption.
Yet in relative terms, that is almost irrelevant.
The real problem starts much earlier. And it ends much later.
And it is currently growing into a dimension that hardly anyone thinks through to the end in public.

Because: **the energy-hungry, resource-devouring catastrophe of AI does not lie in the application. It lies in the training. And in the business model behind it.**

### 1. AI as an excuse, not a tool

We are witnessing a communication strategy in which AI itself becomes the political justification:
– Job cuts? “AI makes it more efficient.”
– Austerity programmes? “Automation takes over.”
– Bad investments? “We have to keep up, or we fall behind.”
AI serves as a rhetorical joker to cover up decisions that have little to do with technology and a great deal to do with power and cost pressure.

The most absurd part: while “efficiency” is being preached, an energy and raw-material demand is being built up in the background that bears no relation to actual usage.

### 2. Circular deals: money in, data out, energy gone

Today’s AI ecosystem is a closed circular business:
Companies develop models that are in turn used to train further models, which are supposed to improve new models, which are then marketed as a “quantum leap” in order to… train even more models.

It is not about the end user.
The end user is merely the backdrop needed so the whole thing can be called “innovation”.

In truth, the end user never justifies the gigantic resource consumption these training loops require. There simply *is* not enough real demand to legitimise this machinery.

### 3. The real environmental catastrophe: training, not application

A single training run of the large models requires:

– energy on the scale of small countries
– computing power that pushes data centres to their limits
– cooling water that is then missing in regions facing water scarcity
– hardware whose production devours rare raw materials

And all of this not as an exception, but as a permanent state:
new models are being trained, fine-tuned, retrained, repeated, and scaled practically all the time.
Usage, on the other hand (that is, what we are doing right now), is the smallest item in a gigantic and unnecessary environmental balance sheet.

### 4. The real scandal: we train for a fantasy audience

The market acts as if there were billions of customers making millions of requests every day.
That is not true.
Even with rising usage, real demand will never be enough to justify this energy consumption.

We train models because training itself has become the business model.

We train models because investors only respond to scaling.

We train models because it feeds political narratives (“digital powerhouse!”).

We train models because otherwise the companies could not explain where all the money went.

This is not innovation.
This is the industrialisation of idling.

### 5. The dark truth: the AI bubble grows until it bursts

We have built a resource machine that keeps running regardless of its usefulness.
A machine that burns energy, consumes water, and devours raw materials, not to solve problems but to justify the machine’s own growth.

Next to this absurdity, the entire age of industrialisation looks almost modest.
Back then, at least, something was produced.
Here, training exists in order to train.

The ecological truth is therefore:
**Usage is not the problem.
The business model is.**

---

#AI #Kritik #Umwelt #Energie #Digitalisierung #Wirtschaft #Politik #Ressourcen #Greenwashing #Siliziumwirtschaft #Datenökonomie #Training #AGI #Blase #KI #TechKritik #DigitalPolicy #Sustainability #WaterCrisis #Rechenzentren #Arbeitswelt #Automatisierung #Fakten #Ökologie #Dorfzwockel
2025-11-27

Did you know? Australia's social media ban for kids under 16 is being challenged in the High Court! Teens claim it infringes upon their free speech rights. The question now is: what's the future of digital policy? #Australia #SocialMediaBan #YouthRights #DigitalPolicy
squaredtech.co/australia-socia

2025-11-25

Anthropic has officially launched Opus 4.5 with Chrome and Excel integration, marking a step forward in multimedia AI. Meanwhile, Malaysia is preparing to tighten social media rules: children under 16 will be banned from using it starting in 2026 to protect them from harmful online effects.

#CôngNghệ #AI #Anthropic #Opus4_5 #MạngXãHội #Malaysia #ChínhSáchMạng #CNTT #TechNews #ArtificialIntelligence #SocialMediaBan #DigitalPolicy #VietnameseTech

vtcnews.vn/cong-nghe-25-11-ant

2025-11-24

Roblox and Twitch are implementing significant youth-safety changes amid rising global scrutiny.
Roblox is introducing AI-based age verification (facial scans or government IDs processed by Persona), restricting cross-age chats, and dividing players into six age tiers. Rollout will expand globally through January 2026.

Australia has also added Twitch to its under-16 social media ban, requiring platforms to close under-16 accounts starting next month.
How do we balance identity verification, privacy, and safety - especially when minors are required to use platforms differently from adults?
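
As a rough sketch of what tier-based chat gating could look like in practice: the snippet below models six age tiers and a cross-tier chat check. The tier boundaries and the `max_gap` policy knob are illustrative assumptions, not Roblox’s published rules.

```python
from enum import IntEnum

class AgeTier(IntEnum):
    # Six tiers as described in the post; the exact boundaries here are
    # illustrative assumptions, not Roblox's published cut-offs.
    UNDER_9 = 0
    AGE_9_12 = 1
    AGE_13_15 = 2
    AGE_16_17 = 3
    AGE_18_20 = 4
    AGE_21_PLUS = 5

def can_chat(a: AgeTier, b: AgeTier, max_gap: int = 1) -> bool:
    """Permit chat only between users whose tiers are at most `max_gap` apart.

    `max_gap` is a hypothetical policy knob: 0 confines chat to the same
    tier, 1 also allows neighbouring tiers.
    """
    return abs(int(a) - int(b)) <= max_gap

# Example: adjacent tiers may chat, widely separated tiers may not.
print(can_chat(AgeTier.AGE_13_15, AgeTier.AGE_16_17))   # True
print(can_chat(AgeTier.AGE_13_15, AgeTier.AGE_21_PLUS)) # False
```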

Follow @technadu for more cybersecurity-centric coverage.

#CyberSecurity #AgeVerification #OnlineSafety #Roblox #Twitch #DigitalPolicy #Infosec #ChildSafetyOnline #PrivacyMatters

