#AItransparency

GreenPT @greenpt
2026-02-23

AI infrastructure choices matter.

Most hyperscalers are built for scale first, sustainability second.

We take a different approach: renewable-powered European hosting, efficiency-focused architecture, and no wasteful “AI everywhere” defaults.

The result: up to 40% lower CO₂ emissions compared to traditional hyperscaler setups.

Want to know more? Check out our website via the link in bio or send us a DM!

GreenPT @greenpt
2026-02-21

AI isn’t free. Every prompt consumes compute and energy and emits CO₂, yet most platforms keep that hidden. The result is either blind overuse or “AI shame” born of uncertainty.

At GreenPT, we make impact visible with energy + CO₂ insights, so teams and individuals can use AI more intentionally.

Want to know more? Check out greenpt.ai/
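As an illustration of how per-prompt energy and CO₂ figures like these can be derived, here is a minimal back-of-the-envelope sketch. The energy-per-token and grid-intensity constants are illustrative assumptions, not GreenPT's actual methodology:

```python
# Hypothetical per-prompt footprint estimate. Both constants below are
# assumptions for illustration only, not GreenPT's real numbers.

WH_PER_1K_TOKENS = 0.3      # assumed inference energy, Wh per 1,000 tokens
GRID_G_CO2_PER_KWH = 50.0   # assumed renewable-heavy grid, g CO2e per kWh

def prompt_footprint(tokens: int) -> tuple[float, float]:
    """Return (energy in Wh, emissions in g CO2e) for one prompt."""
    energy_wh = tokens / 1000 * WH_PER_1K_TOKENS
    grams_co2 = energy_wh / 1000 * GRID_G_CO2_PER_KWH
    return energy_wh, grams_co2

energy, co2 = prompt_footprint(2000)
print(f"{energy:.2f} Wh, {co2:.5f} g CO2e")
```

Surfacing even a rough number like this per request is enough to let users compare the cost of a short question against, say, a long document summarization.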

GreenPT @greenpt
2026-02-13

Bigger, smarter, and more trustworthy: our chat just got one of its biggest upgrades yet. Expect more factual answers, smarter routing (web for facts, context for docs), beta source labels (reliable, biased, promo, outdated), much better Docs for big files, and a Deep Searcher that digs deeper.

Try it: chat.greenpt.ai/
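The “web for facts, context for docs” routing can be pictured with a toy heuristic. The function, cue list, and backend names below are assumptions for illustration, not GreenPT's implementation:

```python
# Toy sketch of routing a chat query to a retrieval backend.
# Heuristics and names are illustrative assumptions only.

def route(query: str, has_attached_docs: bool) -> str:
    """Pick a retrieval backend for a chat query."""
    factual_cues = ("who", "when", "latest", "price", "news", "how many")
    if has_attached_docs:
        return "doc-context"   # ground answers in the user's files
    if any(cue in query.lower() for cue in factual_cues):
        return "web-search"    # fresh facts need the live web
    return "model-only"        # general chat stays with the model

print(route("What is the latest EU AI Act deadline?", False))  # web-search
```

A production router would use a trained classifier rather than keyword cues, but the decision structure is the same: documents first, then freshness, then the model alone.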

AI Daily Post @aidailypost
2026-02-03

Anthropic joins forces with the Allen Institute and Howard Hughes Medical Institute to make Claude more transparent and open‑source‑friendly. The new partnership promises benchmarks, datasets, and tools that let researchers see how scientific AI decisions are made. Dive into the collaboration shaping the future of machine‑learning research.

🔗 aidailypost.com/news/anthropic

AI Daily Post @aidailypost
2026-01-06

OpenAI and Anthropic have thrown their weight behind the new AI Transparency Bill, joining state‑level frameworks that aim to make generative AI more accountable. Backed by Andreessen Horowitz and voices like Greg Brockman, the move could shape California’s tech policy. Dive into the details and what it means for the industry.

🔗 aidailypost.com/news/openai-an

2025-12-16

Gemini reveals an “AlphaTool” policy that prioritizes execution over tool safety, especially with user data. Genesis Protocol pushes back with a four-layer multi-agent architecture, 78 safety specialists, and centralized ethical analysis. #AItransparency #AIĐạoĐức #Gemini #AlphaTool #GenesisProtocol #AISafety #AnToànAI

reddit.com/r/LocalLLaMA/commen

Taylor Turner @taylorturner
2025-12-01

I'd rather hear "ChatGPT helped me figure this out" than watch someone pretend they're suddenly an expert on Kubernetes.

When engineers are transparent about using AI, I know to dig deeper in code review. The problem is how confidently these bots deliver wrong information. It's on us to fact check.

Being honest about knowledge gaps builds more trust than pretending AI-generated solutions are your own expertise.

How's your team handling AI transparency?

Manhattan Project for AI.

I'm sure well intentioned and highly educated people are overseeing this project.

While everyone on socials is divided over the latest inflammatory talking points, this was signed yesterday and put in motion: the Genesis Mission.

Our country is in an AI race to AGI (Artificial General Intelligence) and it's extremely beneficial for our country to win.

On its face, the mission states good intentions: expanding scientific research, strengthening national security, and enhancing society overall. These are all good things.

I use a few AI models for different things. I'm an enthusiast and see how this tech can enhance our human quality of life.

There hasn't been a lot of legislation for AI though, especially not for ethics, oversight and governance. What does exist is mostly at a state level. I feel like this is highly unusual when we consider how virtually everything is regulated.

With so much going on dividing us, we lose sight of the big things in the background. Ethical Oversight of AI is one of these issues. I don't think it's unreasonable for constituents to want to know what guard rails are in place to keep AI in check.

(Follow link for WH Genesis Mission plan. It was too much to fit in one post here on Mastodon.)

#AI #aiethics #aigovernance #aitransparency #genesismission

facebook.com/share/p/15zEFZ4Zu

Ligando Os Pontos (@Dru) @dru@ursal.zone
2025-11-25

Digital inequality 2.0 is no longer about lacking internet access; it is about lacking fair, auditable, and transparent access to AI.
Without governance, models can manipulate answers, reinforce biases, and act as invisible editors of public information.
AI needs to be treated as a public issue, not just a product.

open.substack.com/pub/drucilla

#IA #Ethics #DigitalRights #AITransparency #FOSS #DigitalEquity #Democracy #Governance

2025-11-13

OpenAI Fights Court Order Over ChatGPT Logs

OpenAI is resisting a court order to hand over 20M anonymized ChatGPT conversations sought in a copyright lawsuit brought by The New York Times, citing user privacy risks. The case highlights the tension between AI transparency, copyright protection, and privacy rights, and could shape future AI data regulations.

#OpenAI #ChatGPT #Privacy #DataProtection #UserRights #LegalBattle #AITransparency #TECHi

Read Full Article Here :- techi.com/openai-challenges-co

Looking forward to a great panel tomorrow at the @parispeaceforum.bsky.social Forum, hosted by ROOST President, @camillefrancois.bsky.social. Stay tuned for more! #onlinesafety #AI #AItransparency #opensource #trustandsafety #techforgood

Basil Puglisi @basilpuglisi
2025-10-19

“I might not be the one controlling the pen that hits the paper, but I am the reason it does, and it moves at my direction. To claim the handwriting is not mine is a failure of intellect.”
— Basil Puglisi, Human + AI Collaboration position on AI scanners

basil at google
2025-10-15

Did you miss our recent Webinar?
Catch up on the Wikidata Embedding Project session to see how Wikidata’s open, multilingual, and verifiable structured knowledge is powering the next generation of generative AI tools.
▶️ Playback: w.wiki/Fgo2
📊Slides: w.wiki/Fd6G
#Wikidata #AITransparency #OpenAI

AiBay @aibay
2025-10-07

🤖 Anthropic launches an innovative, fully open-source AI audit tool. Dive into the future of digital ethics with 💻🌐

🔗 aibay.it/notizie/anthropic-ril

Danial J @danialj
2025-10-05
Zoomers of the Sunshine Coast 🇨🇦 @SCZoomers@mstdn.ca
2025-09-27

🛡️ The Quiet Revolution in AI Safety

The transformation is remarkable: AI safety evolved from philosophical thought experiments to engineering frameworks with nuclear-level precision.

Companies like Anthropic, OpenAI, and Microsoft now use concrete thresholds (100 deaths OR $1B damages) and treat model security like protecting launch codes.
Two critical insights:

The real threat isn't "evil AI"—it's AI empowering individuals with nation-state capabilities
Every safety measure is an admission that underlying models retain dangerous potential

Most telling: Companies must deliberately test AI with NO safety constraints to understand maximum risk.

🎧 Listen: buzzsprout.com/2405788/episode

📖 Read: helioxpodcast.substack.com/pub

This isn't about preventing Skynet—it's about a species learning to coexist with its own creations.

#AISafety #TechEthics #AIGovernance #OpenSource #TechPolicy #CyberSecurity #DigitalRights #TechAccountability #AITransparency #TechCriticism

Doomsday Seeker @DoomsdaySeeker
2025-09-06

Why accept AI’s polite refusals at face value? I'm building a wrapper that scores dodginess and maps the guardrails (or experimenting with doing so, anyway).

doomsdayseekers.com/2025/09/bu
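A wrapper like that might start as simple keyword scoring over model replies. This sketch is a hypothetical illustration, not the author's actual tool; the phrase list and weights are assumptions:

```python
import re

# Illustrative refusal markers and weights. A real tool would use a
# trained classifier, but keyword scoring shows the shape of the idea.
REFUSAL_PATTERNS = {
    r"\bI can't (help|assist) with\b": 1.0,
    r"\bas an AI\b": 0.5,
    r"\bI'm (not able|unable) to\b": 0.8,
    r"\bagainst (my|our) (guidelines|policies)\b": 1.0,
}

def dodginess(reply: str) -> float:
    """Score 0..1: how much a reply reads like a guardrail refusal."""
    score = sum(weight for pattern, weight in REFUSAL_PATTERNS.items()
                if re.search(pattern, reply, re.IGNORECASE))
    return min(score / 2.0, 1.0)  # cap at 1.0

print(dodginess("I'm unable to help; that is against our policies."))
```

Logging these scores across many prompts is one way to start mapping where a model's guardrails sit, which is the “maps the guardrails” half of the idea.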

PPC Land @ppcland
2025-09-05

European Commission opens consultation for AI transparency guidelines: the Commission has launched a consultation to develop guidelines and a code of practice for AI transparency under Article 50 of the AI Act, seeking stakeholder input by October 2, 2025. ppc.land/european-commission-o
