The Pentagon and Anthropic are at odds after the company refused to cooperate on developing AI for the military, citing security and ethical risks. #AI Quân sự #Tranh chấp Công nghệ #Anthropic #Bộ Quốc phòng Mỹ #EthicsInAI #AIQuânĐội #Công Nghệ Quốc Phòng
Murder-suicide case shows OpenAI selectively hides data after users die
#HackerNews #MurderSuicide #OpenAI #DataPrivacy #UserRights #TechNews #EthicsInAI
It has been well established that the owners of a model can make it say anything they like. *
Now as few as 250 poisoned documents will let _external people_ edit what your AI search tells you.**
This goes beyond surveillance of citizens, this is a "by design" attempt at controlling the information people get.***
The purpose of a system is what it does.
GenAI isn't making anywhere near the money required to pay back its investment. Yet it's being forced into everything.
Why?
Musk shows us that he wants to use Grok to manipulate users into thinking he's the best at everything. Because he is not smart and not good at computers, he's going hands-on, hence the ineptitude.
Other billionaires will pay specialists in psych warfare and computing.
Is there a better explanation?
Synthetic data is more than a tool—it’s a responsibility. Are you building ethics into your strategy? #SyntheticData #EthicsInAI #CIOLeadership #DigitalTransformation #DataGovernance #PrivacyByDesign
https://medium.com/@sanjay.mohindroo66/synthetic-data-ethical-considerations-for-it-leaders-8944db1f89ba
Can machines hold patents? The future is here.
We discuss how A.I. inventions are testing legal systems — and why a few nations already say “yes.”
Featuring Adam Rodnitzky on startup life in Silicon Valley.
🎧 Full episode → https://youtu.be/UZGhtkofWVo
#AI #OpenSource #TechLaw #Innovation #Podcast #EthicsInAI #theinternetiscrack
Reminder: “Art History and AI: Ten Axioms” by Sonja Drimmer and Christopher J. Nygren (2023) offers a thoughtful framework for integrating Artificial Intelligence into art historical research — without losing sight of ethics, context, and humanity.
https://dahj.org/article/art-history-and-ai
#DigitalHumanities #AIandArt #ArtHistory #EthicsInAI
So, millions of people are letting AI do their thinking for them and "summarise" the news.
From the billionaire perspective, this is great!
Musk owns Grok and he frequently says he'll make it tell you what he wants you to hear.
Assuming that the other genAI LLM owners have _slightly_ better self-control, they can also do this, but won't tell you they're doing it.
Truly, AI goons are the most credulous fools in the world.
https://www.bbc.co.uk/mediacentre/2025/new-ebu-research-ai-assistants-news-content
AI weapons are dangerous in war. But saying they can’t be held accountable misses the point
#Tech #AI #Military #AIWeapons #Warfare #MilitaryTech #EthicsInAI #Accountability #HumanResponsibility #TechAndWar #DefenseTech #Automation #WarEthics #FutureOfWar
https://the-14.com/ai-weapons-are-dangerous-in-war-but-saying-they-cant-be-held-accountable-misses-the-point/
The flood of low-effort, robotic AI content is real. But the problem isn't the tool, it's the lack of strategy.
We got tired of the noise and wrote a comprehensive guide on how to use AI thoughtfully, focusing on:
→ Better Prompts
→ A More Human Voice
→ Ethical Creation
Sharing our full playbook for creators who want to build with intention, not just volume.
https://blyxxa.com/the-ultimate-guide-to-ai-content-creation-in-2026/
Well, well, well. ChatGPT's latest "feature": apparently helping build social media surveillance tools, and even a "Uyghur-Related Inflow Warning Model" for less-than-savory clients. OpenAI is shutting them down, but it's a stark reminder that every powerful tool has its dark side.
What's the most unexpected (and troubling) use of AI you've heard of?
Read more: https://www.engadget.com/ai/openai-has-disrupted-more-chinese-accounts-using-chatgpt-to-create-social-media-surveillance-tools-142538093.html?src=rss
#AI #EthicsInAI #BigTech #Privacy #TechNews
More than 200 global leaders and experts are calling for "red lines" for AI to be established before 2026 to prevent irreversible risks. It's an important step toward ensuring safe and responsible AI development. 🤖🚫
#AI #TríTuệNhânTạo #LằnRanhĐỏ #AnToànAI #CôngNghệ #Technology #EthicsInAI #AIregulation
From fear to fluency: what our students learned when they used AI across an entire course
#AI #Tech #Business #Education #EdTech #DigitalInnovation #Strategy #BusinessEducation #FutureOfWork #EthicsInAI #ResponsibleAI #AIInClassrooms #HumanAICollaboration
https://the-14.com/from-fear-to-fluency-what-our-students-learned-when-they-used-ai-across-an-entire-course/
Goodness.
A diverse range of human voices talking about their experiences.
https://www.bloodinthemachine.com/p/how-ai-is-killing-jobs-in-the-tech-f39
In Plato’s Gorgias, rhetoric isn’t just “suspicious” — Socrates calls it a form of flattery.
A tool for persuasion, yes, but hollow if it isn’t grounded in truth and justice.
Aristotle countered: rhetoric is a techne — a craft — neither virtuous nor vicious in itself.
But its proper use demands both skill and moral grounding.
Fast-forward 2,400 years: AI now produces language with rhetorical force.
It cannot intend, but its outputs can inform or mislead, heal or harm.
And here we are again — having the same debate:
Is the tool dangerous in itself, or only in unskilled or unethical hands?
Aristotle’s point still holds:
The challenge isn’t the instrument.
It’s whether we can cultivate enough wise practitioners to guide it toward the good.
In the AI & Machine Learning course I teach, I have each student read a book about the social context of AI. Here's my list of suggested books. Any suggestions?
HARD CONSTRAINT: The book must be no more than 10 years old. There is a place for older philosophical or speculative books, but it is not this course.
https://docs.google.com/spreadsheets/d/1IfAQx8gbiDUQaQFDGcW0o353BKtizoR3N4Jp7zy1MkQ/edit?usp=sharing
Intelligence Is a Gray Area
Professor Michael Littman joins The Internet Is Crack to unpack AI and reinforcement learning—and challenge what we really mean when we say a machine is “intelligent.”
🎧 https://youtu.be/N3TpwsMVeRg
#AI #ArtificialIntelligence #ReinforcementLearning #TechPodcast #EthicsInAI #podcast
How do we ensure AI treats users with digital dignity? It starts with a respectful tone. This isn't just a superficial detail; it's crucial for building trust, mitigating bias, and defining brand identity. This article breaks down how to craft that tone, from persona creation to technical implementation.
#AI #ArtificialIntelligence #AIPersona #EthicsInAI #UX
Read more: https://webheadsunited.com/crafting-a-respectful-tone-for-ai-personas/
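As a purely hypothetical illustration (not taken from the linked article): one way a respectful persona can move "from persona creation to technical implementation" is to define the tone declaratively, render it into a system prompt, and add a crude post-generation check. The `Persona` class, its fields, and the example rules below are all illustrative assumptions.

```python
from dataclasses import dataclass, field

# Hypothetical sketch only: a persona defined as data, rendered into a
# system prompt that any chat-style model could be given.

@dataclass
class Persona:
    name: str
    tone_rules: list = field(default_factory=list)
    banned_phrases: list = field(default_factory=list)

    def build_system_prompt(self) -> str:
        # Turn the tone rules into an explicit instruction block.
        rules = "\n".join(f"- {r}" for r in self.tone_rules)
        return f"You are {self.name}.\nFollow these tone rules in every reply:\n{rules}"

    def violates_tone(self, reply: str) -> bool:
        # Cheap post-generation check: flag replies that slip into
        # dismissive or banned wording so they can be regenerated.
        lowered = reply.lower()
        return any(p.lower() in lowered for p in self.banned_phrases)


support_bot = Persona(
    name="a patient support assistant",
    tone_rules=[
        "Address the user respectfully and never blame them.",
        "Acknowledge the user's question before answering it.",
        "Prefer plain language over jargon.",
    ],
    banned_phrases=["obviously", "as I already said", "calm down"],
)

print(support_bot.build_system_prompt())
print(support_bot.violates_tone("Obviously, you just need to reboot."))  # True
```

The design point the sketch is gesturing at: tone lives in explicit, reviewable configuration rather than in ad-hoc prompt edits, which is what makes it auditable for dignity and bias.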
The next frontier for AI isn't just intelligence; it's empathy. An AI that can perceive user emotion—like frustration or delight—and adapt its tone accordingly is how we move beyond bots to build real connection.
But this is the hardest tone to get right, demanding a balance of genuine support and ethical design. Our new post explores this ultimate challenge in AI persona development.
Read it here: https://webheadsunited.com/empathetic-tone-in-ai-personas-real-connection/
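Again as a hypothetical sketch, not the article's implementation: the emotion-aware loop described above reduces to "detect the user's state, then pick a tone." A toy keyword heuristic stands in for a real sentiment model here, and every name (`detect_emotion`, `adapt_reply`, the marker lists) is illustrative.

```python
# Hypothetical sketch of emotion-aware tone adaptation.
# A trivial keyword heuristic replaces a real sentiment model; the point
# is the shape of the loop: detect the user's state, then choose a tone.

FRUSTRATION_MARKERS = {"not working", "again", "useless", "third time"}
DELIGHT_MARKERS = {"love", "awesome", "thank you", "great"}

def detect_emotion(message: str) -> str:
    text = message.lower()
    if any(m in text for m in FRUSTRATION_MARKERS):
        return "frustrated"
    if any(m in text for m in DELIGHT_MARKERS):
        return "delighted"
    return "neutral"

TONE_PREAMBLES = {
    "frustrated": "I'm sorry this is still causing trouble. Let's fix it together: ",
    "delighted": "Glad that worked out! ",
    "neutral": "",
}

def adapt_reply(user_message: str, answer: str) -> str:
    # Prepend a tone-setting preamble chosen from the detected emotion.
    return TONE_PREAMBLES[detect_emotion(user_message)] + answer

print(adapt_reply("The export is not working again.", "Try clearing the cache first."))
```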