#EthicsInAI

PJ "chinga la migra" CoffeyHomebrewandhacking@mastodon.ie
2025-10-24

So, millions of people are letting AI do their thinking for them and "summarise" the news.

From the billionaire perspective, this is great!

Musk owns Grok and he frequently says he'll make it tell you what he wants you to hear.

Assuming that the other genAI LLM owners have _slightly_ better self-control, they can also do this, but won't tell you they're doing it.

Truly, AI goons are the most credulous fools in the world.

bbc.co.uk/mediacentre/2025/new

#AI #EthicsInAI #Claude #grok #musk

🕯️Fresh from the oven, a new book I co-edited with my colleagues and friends Jordi Vallverdú and Vicent Costa.

“There is no such thing as a natural death: nothing that happens to a man is ever natural, since his presence calls the world into question. All men must die: but for every man his death is an accident and, even if he knows it and consents to it, an unjustifiable violation.”
— Simone de Beauvoir, A Very Easy Death (1965)

In the digital age, death feels increasingly strange to us — as if it were moving farther away, abstracted, outsourced.

"SecondDeath. Experiences of Death Across Technologies" explores how artificial intelligence, robotics, and digital systems are reshaping our experience of mortality.

In a world mediated by algorithms and cybernetic agents, death is no longer only biological. It becomes symbolic, synthetic, sometimes programmable.

The question is no longer if machines can die, but what their death reveals about our own: about grief, identity, continuity, and the boundaries of consciousness.

This volume gathers philosophical, cultural, scientific, ethical, psychological, and technological perspectives to rethink one of humanity’s most ancient enigmas — through the lens of our most recent inventions.

To confront death is, once again, to rethink life, our reality itself.

link.springer.com/book/9783031

#SecondDeath #Philosophy #AI #Robotics #DigitalEthics #DeathStudies #Technology #CognitiveScience #Bioethics #Neuroethics #Psychiatry #Psychology #Thanatology #DigitalHumanities #CulturalStudies #Cybernetics #ArtificialLife #HumanMachineInteraction #AIEthics #Transhumanism #Posthumanism #Neuroscience #Existentialism #PhilosophyOfMind #EthicsInAI #ScienceAndSociety #academia

Blyxxa (@blyxxa)
2025-10-09

The flood of low-effort, robotic AI content is real. But the problem isn't the tool, it's the lack of strategy.

We got tired of the noise and wrote a comprehensive guide on how to use AI thoughtfully, focusing on:
→ Better Prompts
→ A More Human Voice
→ Ethical Creation

Sharing our full playbook for creators who want to build with intention, not just volume.

blyxxa.com/the-ultimate-guide-

[Image: A human hand and a robot hand connecting, symbolizing the thoughtful, strategic collaboration needed for high-quality AI content creation.]
Mind Lude (@mindlude)
2025-10-07

Well, well, well. ChatGPT's latest "feature":
apparently helping build social media surveillance tools
and even a "Uyghur-Related Inflow Warning Model" for less-than-savory clients.

OpenAI's shutting them down, but it's a stark reminder that
every powerful tool has its dark side.

What's the most unexpected (and troubling) use of AI you've heard of?

Read more: engadget.com/ai/openai-has-dis

2025-09-23

More than 200 global leaders and experts are calling for "red lines" for AI to be established before 2026 to prevent irreversible risks. This is an important step toward ensuring safe and responsible AI development. 🤖🚫

#AI #TríTuệNhânTạo #LằnRanhĐỏ #AnToànAI #CôngNghệ #Technology #EthicsInAI #AIregulation

vietnamnet.vn/hon-200-lanh-dao

PJ "chinga la migra" CoffeyHomebrewandhacking@mastodon.ie
2025-08-19

Goodness.

A diverse range of human voices talking about their experiences.

bloodinthemachine.com/p/how-ai

#AI #EthicsInAI

2025-08-10

In Plato’s Gorgias, rhetoric isn’t just “suspicious” — Socrates calls it a form of flattery.
A tool for persuasion, yes, but hollow if it isn’t grounded in truth and justice.

Aristotle countered: rhetoric is a techne — a craft — neither virtuous nor vicious in itself.
But its proper use demands both skill and moral grounding.

Fast-forward 2,400 years: AI now produces language with rhetorical force.
It cannot intend, but its outputs can inform or mislead, heal or harm.
And here we are again — having the same debate:
Is the tool dangerous in itself, or only in unskilled or unethical hands?

Aristotle’s point still holds:
The challenge isn’t the instrument.
It’s whether we can cultivate enough wise practitioners to guide it toward the good.

#philosophy #EthicsInAI #criticalthinking

2025-08-08

In the AI & Machine Learning course I teach, I have each student read a book about the social context of AI. Here's my list of suggested books. Any suggestions?

HARD CONSTRAINT: The book must be no more than 10 years old. There is a place for older philosophical or speculative books, but it is not this course.

docs.google.com/spreadsheets/d

#ai #MachineLearning #ethics #EthicsInAI #privacy #bias

The Internet is Crack (@theinternetiscrack)
2025-08-08

Intelligence Is a Gray Area

Professor Michael Littman joins The Internet Is Crack to unpack AI and reinforcement learning—and challenge what we really mean when we say a machine is “intelligent.”

🎧 youtu.be/N3TpwsMVeRg

WebHeads United (@webheadsunited)
2025-08-04

How do we ensure AI treats users with digital dignity? It starts with a respectful tone. This isn't just a superficial detail; it's crucial for building trust, mitigating bias, and defining brand identity. This article breaks down how to craft that tone, from persona creation to technical implementation.

Read more: webheadsunited.com/crafting-a-

[Image: Two hands shaking to show respect.]
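
For readers who want a concrete picture of what "persona creation to technical implementation" can look like, here is a minimal sketch in Python. It assumes the official OpenAI Python client; the persona wording, model name, and example message are illustrative and not taken from the linked article.

```python
# Minimal, illustrative persona setup: a system prompt that encodes a
# respectful, dignity-first tone, passed to a chat model.
# Assumes the official OpenAI Python client (v1.x); the persona text,
# model name, and sample message are placeholders, not from the article.
from openai import OpenAI

RESPECTFUL_PERSONA = (
    "You are a support assistant. Address the user courteously, never "
    "blame them for errors, avoid sarcasm and condescension, and "
    "acknowledge their concern before offering steps to resolve it."
)

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def respond(user_message: str) -> str:
    """Return a reply generated under the respectful persona."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": RESPECTFUL_PERSONA},
            {"role": "user", "content": user_message},
        ],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    print(respond("Your app deleted my draft and I'm furious."))
```

The same pattern works with any chat API: keeping the tone guidelines in the system message means they can be versioned and reviewed like any other part of the codebase.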
WebHeads United (@webheadsunited)
2025-07-31

The next frontier for AI isn't just intelligence; it's empathy. An AI that can perceive user emotion—like frustration or delight—and adapt its tone accordingly is how we move beyond bots to build real connection.

But this is the hardest tone to get right, demanding a balance of genuine support and ethical design. Our new post explores this ultimate challenge in AI persona development.

Read it here: webheadsunited.com/empathetic-

[Image: A drawing of a caregiver and a care recipient against a beige background.]
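
As a rough illustration of the perceive-and-adapt loop described above, here is a toy Python sketch. The keyword lists and canned prefixes are invented for demonstration; a production system would use a proper emotion classifier and, as the post notes, careful ethical design.

```python
# Toy sketch of emotion-adaptive tone: guess the user's mood from simple
# keyword cues, then pick a matching response style. The cue lists and
# canned prefixes are invented for illustration; a real system would use
# a trained emotion classifier and human review of the tone guidelines.
FRUSTRATION_CUES = {"furious", "annoyed", "broken", "useless", "crashed", "worst"}
DELIGHT_CUES = {"love", "great", "awesome", "thanks", "perfect"}

TONE_PREFIX = {
    "frustrated": "I'm sorry this has been a hassle. Let's sort it out together: ",
    "delighted": "Glad to hear it! ",
    "neutral": "",
}


def detect_mood(message: str) -> str:
    """Very rough mood guess based on keyword overlap."""
    words = set(message.lower().split())
    if words & FRUSTRATION_CUES:
        return "frustrated"
    if words & DELIGHT_CUES:
        return "delighted"
    return "neutral"


def adapt_reply(message: str, base_reply: str) -> str:
    """Prefix the base reply with a tone suited to the detected mood."""
    return TONE_PREFIX[detect_mood(message)] + base_reply


print(adapt_reply("It crashed again, this is the worst", "Here are the recovery steps."))
```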
2025-07-29

What does it actually mean when we say that generative AI raises ethical questions?
🔵 Dr. Thilo Hagendorff, our research group leader at IRIS3D, has taken this question seriously and systematically. With his interactive Ethics Tree, he has created one of the most comprehensive overviews of ethical problem areas in generative AI: lnkd.in/ebzZYaU7
More than 300 clearly defined issues – ranging from discrimination and disinformation to ecological impacts – demonstrate the depth and scope of the ethical landscape. This “tree” does not merely highlight risks, but structures a field that is increasingly under pressure politically, technologically, and socially.
Mapping these questions so systematically underlines the need for ethical reflection as a core competence in AI research – not after the fact, but as part of the epistemic and technical process.

#GenerativeAI
#AIethics
#ResponsibleAI
#EthicsInAI
#TechEthics
#AIresearch
#MachineLearning
#AIgovernance
#DigitalEthics
#AlgorithmicBias
#Disinformation
#SustainableAI
#InterdisciplinaryResearch
#ScienceAndSociety
#IRIS3D

WebHeads United (@webheadsunited)
2025-07-28

A neutral tone in AI isn't boring; it's a strategic choice. It builds trust, ensures global accessibility, and prevents misinterpretation. But crafting it is a challenge. It's a fine line between objectivity and a cold, robotic feel.

Our latest post explores the art and science of the neutral AI persona, from prompt engineering to the ethics of adaptive tonality.

Read more here: webheadsunited.com/introducing

[Image: A swatch of plaster in a neutral color tone.]
The Internet is Crack (@theinternetiscrack)
2025-07-20

AI Is Reshaping Big Law—and Raising Big Questions

AICERTs (@Aicerts11)
2025-07-11

AI in Law: Revolution or Risk?

Legal AI isn’t about replacing lawyers—it’s about amplifying them.

📚 It sifts through case law in seconds.
🧠 It spots patterns in contracts humans might miss.
⏱️ It cuts research time from hours to minutes.

But with great power comes great liability:

Who’s accountable for AI-generated legal advice?

Can AI truly understand context and nuance?

Are the laws ready for legal AI?

store.aicerts.ai/certification
