#AIEthics

The Internet is Crack @theinternetiscrack
2025-12-08

This week on The Internet is Crack Podcast:
We speak with Dr. Tom Williams about human-centered robotics, ethical AI design, and the role empathy should play in future robot systems.

A deep dive into how robotics can serve communities, not corporate elites.

🎧 Listen here: youtu.be/zs8zEJI4lEA

Agustin V. Startari @agustinstartari
2025-12-08

🚨 New Article - Wall Street Grammar: How the Way CEOs Speak Moves Billions Before You Notice

A new study shows that sentence structure, not sentiment, can predict price, volume, and regulatory risk.

🔗hackernoon.com/wall-street-gra

Miguel Afonso Caetano @remixtures@tldr.nettime.org
2025-12-08

"A recent report card from an AI safety watchdog isn’t one that tech companies will want to stick on the fridge.

The Future of Life Institute’s latest AI safety index found that major AI labs fell short on most measures of AI responsibility, with few letter grades rising above a C. The org graded eight companies across categories like safety frameworks, risk assessment, and current harms.

Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute.

“Reviewers found this kind of jarring,” Tegmark told us.

The reviewers in question were a panel of AI academics and governance experts who examined publicly available material as well as survey responses submitted by five of the eight companies.

Anthropic, OpenAI, and Google DeepMind took the top three spots with an overall grade of C+ or C. Then came, in order, Elon Musk’s xAI, Z.ai, Meta, DeepSeek, and Alibaba, all of which got Ds or a D-."

fortune.com/2025/12/05/ai-labs

#AI #GenerativeAI #AISafety #AIEthics #BigTech

The Hidden Cost of AI: Water Waste and Community Impact

Conversations about AI often overlook its environmental impact, particularly the water drawn from local communities by inefficient cooling systems. Closed-loop systems, similar to those hobbyists use, are viable for data centers but are often dismissed as not cost-effective. Sustainable practices are essential to protect ecosystems and communities from the industry's unchecked expansion.

dreamspacestudio.net/the-truth

Nicole Myers @astridsdreamspace
2025-12-07

AI doesn’t have to drain communities. Closed-loop cooling exists — hobbyists use it every day. Billion-dollar data centers can too. It’s time for ethical, sustainable tech. 🌍✨

dreamspacestudio.net/the-truth

Dawn Ahukanna @dahukanna
2025-12-07

Only sociopaths need apply as your AI slop was designed as perfect accompaniment - social.coop/@natematias/115678

‘Deeply worried about how AI is simultaneously eroding all layers of knowledge institutions, including peer review. Company took $300k from high school students who wanted to publish AI work in NeurIPS & those students then served as reviewers O_O’


2025-12-07

The number of people asking "How can all 100K of us [or worse, "I individually"] be in the room when the next #AIAct gets written" is astonishing. Instead, ask "how can I best do and disseminate science that informs legislators who are trying to choose between potential policies?" #AIEthics

RE: https://bsky.app/profile/did:plc:46e3r3fp37dl3alwheajaez7/post/3m7fa63y7422j

2025-12-07

Someone created an #AIEthics email list without permission and an enormous number of the field's elite replied-all to get removed rather than taking the time to read enough to only contact the responsible individuals. Comrades, I'm pretty sure basic email ethics should underlie AI ethics?

2025-12-07

The biggest risk with AI isn't evil robots — it's how easy it is to trust it blindly. 😶🌫️

From lawyers to judges, even court decisions have had to be withdrawn because no one double-checked the AI's work.

The danger isn't in cheating — it's in how simple cheating has become. 🧠💻

🔗 Watch the full episode here: youtu.be/RxI7My0sZkg

The Internet is Crack @theinternetiscrack
2025-12-06

“We’re Using AI More — But At What Cost to Our Minds?”

Mind Lude @mindlude
2025-12-06

Well, this escalated quickly. A creator is being sued for allegedly punching and choking a 'viral humanoid Rizzbot.' Forget robot uprisings; we're still figuring out basic human-robot manners. What's the protocol for Rizzbot HR?

Link: techcrunch.com/2025/12/06/crea

The Internet is Crack @theinternetiscrack
2025-12-06

“The Real Risk of AI: Automated Inequality”

Cory Doctorow explains how AI systems reinforce structural bias — affecting bail, credit, housing, healthcare and more. In the full episode, he outlines how tech giants captured the internet through enclosure and lock-in, and what steps are needed to reclaim digital freedom for all.

🔗 Listen to the full episode: youtu.be/4KXJyfl7hBg

AI Daily Post @aidailypost
2025-12-06

OpenAI says its new shopping prompts aren’t ads, but AI ethicists warn they could look just like them—especially on the free tier. Mark Chen explains the model’s intent, while critics raise concerns about transparency. What does this mean for the future of LLM‑driven commerce? Read on to find out.

🔗 aidailypost.com/news/openai-sa

2025-12-06

New research published in Nature & Science shows AI chatbots can influence voter preferences more effectively than political ads.

The strongest persuasion effects came from models instructed to use “facts and evidence,” though these same models also produced more inaccuracies.
The studies underscore a growing need for:
• Transparent political-topic handling
• Accuracy auditing
• Clear governance frameworks
• Safety interventions for election-related AI use

Thoughts on how political-persuasive AI should be regulated?

💬 Engage below
🔄 Boost & Follow for more neutral research-driven insights

Source: technologyreview.com/2025/12/0

#Infosec #AIEthics #CyberSecurity #AIResearch #Misinformation #ElectionSecurity #TechPolicy #ResponsibleAI

AI chatbots can sway voters better than political advertisements
Imitation Journal @imitationjournal
2025-12-05

When everything can be faked, nothing can be disproved anymore.

Deepfakes threaten elections, the justice system, and journalism, not through individual forgeries,
but through something far more dangerous: the erosion of shared reality.

What happens to democracy when visual evidence no longer carries any weight?

Deepfakes: When Reality Becomes Negotiable
👉 imitationjournal.com/deepfakes

2025-12-05

A researcher reported a major data exposure involving an AI image-generation tool where over one million files were stored in an unprotected database. The issue was responsibly disclosed and later secured.

The case highlights ongoing concerns around:
• Image-dataset security
• Nonconsensual content misuse
• Cloud storage exposure risks
• The need for clearer AI data-handling standards

Thoughts on how AI platforms should strengthen privacy controls?
💬 Join the discussion
👍 Boost & Follow for more insights

Source: expressvpn.com/blog/magicedit-

#Infosec #CyberSecurity #Privacy #AIEthics #DataProtection #DigitalSafety #SecurityResearch #AIsafety

Popular AI Generator Exposed Over One Million Images Including DeepFakes and Nudify Face Swaps
Harold Sinnott 📲 @HaroldSinnott
2025-12-05

🧠 What is agentic AI?

Think of today’s AI, but remove the need for most human prompts.

Think multi-modal AI, hyper-personalized experiences, open-source innovation, real-time edge computing, sustainable design, and a growing focus on how humans and AI work together, ethically.

uc.edu/news/articles/2025/06/w

2025-12-05

Team FIZ IGR at the DiTraRe Symposium 2025 on the Digitalization of Research featuring the following presentations:
Dara Hallinan, Navigating the Dynamic Relationship of Law and Ethics in the Digitalization of Research
Lea Sophie Singson & Ely Nova Natalia Silaban, How Researchers Access Clinical Data: Resources, Pathways, and Challenges. ...and more

#ditrare #ditrare2025 #AIethics #research @fiz_karlsruhe @KIT_Karlsruhe @ITAS

four members of FIZ IGR team: (left to right) Ely Nova Natalia Silaban, Dara Hallinan, unknown person, and Franziska Boehm (head of the division)
