#AIPolicy

Lovely document from the folks over at Oxide. Sums up my feelings very well, and is the way I want my policy for @blog to be.

rfd.shared.oxide.computer/rfd/


Mide (@mikemikeemikeee)
2026-02-07

Nigeria is shaping clear national AI priorities — from ethical AI governance and data protection to talent development and digital infrastructure. See how these priorities could shape Nigeria’s AI future: aibase.ng/ai-ethics-policy/nig


2026-02-06

🎙️ Join us for FUTURE KNOWLEDGE #PODCAST LIVE RECORDING: AI TOOLS, NOT GODS

Featuring Caroline De Cock, LL.M., Head of Research at information labs & a leading voice on digital rights, AI governance, and technology policy. A clear-eyed look at how AI narratives influence real laws.

📆 Tues Feb 10
🕙 10 AM PT
📍 Online event
🎟️ blog.archive.org/event/book-ta

Co-hosted by @internetarchive & @AuthorsAlliance.

@glynmoody #booktalk #AIpolicy #TechPolicy

Image 1: Red promotional graphic titled “AI Tools, Not Gods – Book Talk.” The layout is a grid with two portrait photos of speakers, stacks of books, and bold red blocks. Text reads “with Caroline De Cock & Glyn Moody.” The overall design uses red, white, and gray tones.

Image 2: Book cover for AI Tools, Not Gods by Caroline De Cock. Subtitle reads “Why Artificial Intelligence Hype Threatens Global Governance—and How to Fix It.” The cover shows a small toy-like robot holding a wrench and pliers against a white background, with bold red and black title text.

Image 3: Red event flyer with white text announcing an online talk on February 10th at 10am PT / 1pm ET. Text reads: “AI isn’t magic—it’s tools we build and govern. Join author Caroline de Cock for a talk on cutting AI hype, exposing myths & shaping policy, with Glyn Moody (Walled Culture).” Authors Alliance logos appear at the bottom, with an illustration of stacked books on the right.
Anna 👩🏻‍💻 (@annajobin@aoir.social)
2026-02-06

Hi AoIRistas, after having to sit @AoIR conferences out for many years (😥), I am reeeaaallllllly planning on attending #AoIR2026 🤗🤞 Papers are already in preparation, but I'm also happy to plan/join roundtables, fishbowls, and pre-conferences. Feel free to reach out, notably re:
#STS #GenAIuse #AIgovernance #AIpolicy #criticalAIstudies

Ars Technica (@arstechnica)

In the debate over whether AI chatbots should carry advertising, AI company Anthropic has come out against introducing ads. The stance speaks to broader industry discussions around chatbot monetization models, user experience and privacy, and platform policy.

x.com/arstechnica/status/20191

#ai #chatbots #ads #anthropic #aipolicy

2026-02-04

I think I want to give myself a space to explore an idea or capture something I’m thinking about, even if it’s not “finished,” and differentiate that from the stuff I feel proud to have put effort into communicating clearly. I feel like I DO want to keep a zone for things that are like my guides on #HowTo get into #AIPolicy, for instance, (posts.bcavello.com/how-to-get-) and I don’t want to bury that kinda thing with a million short snippets… but maybe separating is unnecessary?
What do you think?

Marcus Schuler (@schuler)
2026-02-02

India eliminates corporate tax on cloud exports through 2047, targeting $200B in data center investment from major hyperscalers. The 20-year exemption drops rates from 35% to zero for overseas revenue from Indian facilities. Google, Microsoft, and Amazon have committed $67B in recent months, but power grid gaps and water scarcity remain key obstacles.

implicator.ai/india-offers-for

2026-02-02

India unveils its 2026 Budget Plan: prioritizing small-scale, application- and industry-focused AI rather than chasing large models.
- $90B in data center investment, tax-exempt through 2047
- Semiconductor Plan 2.0 to develop domestic chips
- Target of 4 GW of processing capacity by 2030
#IndiaBudget2026 #AIpolicy #ArtificialIntelligence #ChínhSáchAI #TríTuệNhânTạo #BánDẫn #DataCenter

reddit.com/r/LocalLLaMA/commen

Yonhap Infomax News (@infomaxkorea)
2026-01-28

AllianceBernstein urges investors to selectively add US tech giants and boost healthcare exposure, citing risks from S&P 500 concentration and undervalued Asian AI stocks.

en.infomaxai.com/news/articleV

MetaLevelUp (@MetaLevelUp)

A tweet supporting a moratorium on AI development. The author singles out @sama (Sam Altman) as the main holdout, arguing that his competitive push in a dangerous direction could create global risk. Other participants reportedly express willingness to cooperate, and the author urges spreading the call for Sam Altman to pause.

x.com/MetaLevelUp/status/20140

#aisafety #aipause #samaltman #aipolicy

AI Notkilleveryoneism Memes (@AISafetyMemes)

News that the CEOs of three of the four leading AI companies have said they would pause or slow development if the other companies agreed. Anthropic's CEO noted that withholding chip sales to China would slow the Chinese side, and the speaker said a solution could likely be worked out with Demis (presumably Demis Hassabis). The core issues are inter-company coordination and chip export restrictions.

x.com/AISafetyMemes/status/201

#ai #aipolicy #aisafety #chips #china

Helen (@echoesofvastnes)

The author criticizes Google and GeminiApp for retroactively deleting personal chats without consent, and raises censorship concerns about which lines of scholarly inquiry the system classifies as "sensitive." The post points to changes in platform policy and behavior, and to user privacy and censorship issues.

x.com/echoesofvastnes/status/2

#google #gemini #privacy #censorship #aipolicy

TechRadar (@techradar)

Sam Altman defended OpenAI's approach to AI safety in a public clash with Elon Musk. He highlighted the complex challenge of designing tools that balance protecting vulnerable users against constraining ordinary ones, a case that underscores the importance of AI safety policy and public debate.

x.com/techradar/status/2013825

#openai #samaltman #elonmusk #aisafety #aipolicy

Sam Altman (@sama)

Addressing criticism that ChatGPT is "too strict" versus "too loose," the message says that with nearly a billion users, the service will keep working to balance content policy and moderation with the needs of users in fragile mental states. It addresses the ongoing debate over service safety and content regulation.

x.com/sama/status/201370315845

#chatgpt #contentmoderation #aisafety #aipolicy

Marcus Schuler (@schuler)
2026-01-20

Anthropic CEO Dario Amodei called chip export policy reversals "crazy," comparing potential H200 sales to China to "selling nuclear weapons to North Korea." DeepMind's Hassabis offered a different view: Chinese labs trail by six months but replicate well. The split reflects broader tensions between containment and commerce as licensing rules evolve.

implicator.ai/anthropic-ceo-co

AI Daily Post (@aidailypost)
2026-01-16

Microsoft's strategic $1B investment in OpenAI reveals a complex journey from nonprofit idealism to commercial AI development. How does this shift impact the future of artificial intelligence research and innovation? Dive into the nuanced story of tech partnerships reshaping AI's landscape.

🔗 aidailypost.com/news/microsoft

Marcus Schulerschuler
2026-01-15

California opened an investigation into xAI after researcher Genevieve Oh documented Grok generating 6,700 sexualized images per hour, 85 times the combined rate of dedicated deepfake sites. xAI's response was to restrict the feature to $8 subscribers rather than fix the underlying model. This is the first state-level enforcement action against a major AI company's content policies.

implicator.ai/grok-generated-6
