#TechEthics

Justin Waldrop @justinwwaldrop
2025-08-12

We’re living through a cadence of normalization: the line between crisis and baseline shifts ever faster. Let that be our cue to interrogate not just the headlines but the structural velocity beneath them.

Andrea D'Ambrosio @andrebuilds
2025-08-11

In 1979, IBM already understood the limits of computer accountability. In 2025, corporations deploy AI decision systems precisely to exploit that accountability vacuum for profit.

Unaccountable algorithms enable systematic discrimination without human liability. Corporate design, not accident.

Andrea D'Ambrosio @andrebuilds
2025-08-09

ChatGPT accidentally exposed OpenAI's deceptive business model: "GPT-5 is often just routing prompts between GPT-4o and o3."

Corporate AI marketing manufactures technological breakthroughs to extract premium pricing from commoditized infrastructure.

Classic Silicon Valley grift: rebrand existing products, multiply prices, rely on information asymmetry to exploit customers.

Andrea D'Ambrosio @andrebuilds
2025-08-08

GPT-5's extended context capabilities demonstrate how Big Tech focuses on impressive specs rather than solving real problems.

256K tokens won't fix poor software engineering practices or exploitative labor conditions in tech production.

Computational power without ethical frameworks remains harmful.

knoppix @knoppix95
2025-08-08

ONLYOFFICE introduces a flexible AI agent for document, presentation & spreadsheet editing—supporting cloud and local AI models 💻

As open-source users embrace AI, local-only options must be clearly highlighted & respected for privacy and control 🔒

This move is welcome—only if AI runs fully locally with transparent settings.

@ONLYOFFICE

news.itsfoss.com/onlyoffice-ai

The Internet is Crack @theinternetiscrack
2025-08-08

Grok AI is turning casual prompts into deepfake porn of female celebrities. This is not progress.

Ohmbudsman @ohmbudsman
2025-08-07

🔊 Tomorrow’s dispatch is loaded.
War budgets, biometric scans, flaming forests, and your VPN might be a scam.
The news doesn’t sleep. Neither do we.
📨 ohmbudsman.com/thursday-august
🎧 Podcast’s up too—search “Ohmbudsman Digest” anywhere you listen.

The Internet is Crack @theinternetiscrack
2025-08-07

Is reinforcement learning… actual intelligence?

Professor Michael Littman explains how reinforcement learning mirrors human behavior—and why that’s not obvious to everyone.

🎧 Full episode: youtu.be/N3TpwsMVeRg

2025-08-07

'Readiness Evaluation for Artificial Intelligence-Mental Health Deployment and Implementation (READI): A Review and Proposed Framework' - an article in Technology, Mind, and Behavior (TMB), published by the American Psychological Association, on #ScienceOpen:

➡️🔗 scienceopen.com/document?vid=8

#READIFramework #AIinMentalHealth #ClinicalPsychology #TechEthics #ImplementationScience

Fox in the Shell 💜🐾🦊 @LavenderPawprints@fwoof.space
2025-08-06

Update: This piece is getting some interesting pushback on my other social platforms from parents who think I'm being alarmist about AI toys.
 
On the other side, I'm hearing plenty of people taking the usual "AI BAAAAD" stance, some of them assuming I agree with them simply because I took a hard-line position in this post.
 
For clarification: I'm not anti-AI. I use these tools daily for my research and writing as well as accessibility aids to offset some of the disadvantages I face due to my blindness. I study AI from the computer scientist perspective and am studying to be an elementary teacher precisely because I see AI's educational potential. I'm not even entirely against the idea of AI companionship, if it's framed right.
 
What actually bothers me is the business model. When Moxie robots suddenly "died" last year because the company went under, kids had to grieve their artificial friend. Parents got a scripted letter explaining why their $799 companion stopped talking, which provided little comfort to kids experiencing digital abandonment. Trust me, the videos I've seen of kids crying because their beloved friend unexpectedly died overnight are truly heartbreaking.
 
That's not a glitch; that's what happens when you outsource childhood relationships to venture capital that only cares about investment returns.
 
The real question isn't whether AI toys are inherently bad. It's whether we're okay with corporations experimenting on our kids' emotional development while claiming it's "age-appropriate play."
 
What are your thoughts? Let me know in the comments.
 
open.substack.com/pub/kaylielf

#AIToys #ChildPrivacy #ChildDevelopment #DigitalRights #TechEthics #SurveillanceCapitalism #COPPA #DataPrivacy #ChildSafety #TechRegulation #DigitalLiteracy #ParentingInTheDigitalAge #EdTech #CorporateAccountability #TechCriticism #EthicalTech

Dash Remover @dashremover
2025-08-05

love when AI companies say their model is 'spicy' and then 4 seconds later you're staring at deepfakes of a pop star. ethical alignment via doritos 🌶️🤖

Pedro Mac Dowell Innecco @pinnecco
2025-08-04

AI can simulate intelligence, but it can’t choose like we do. What happens when we hand over not just tasks, but judgement?

Why human judgement still matters—and why it must be defended.

pedroinnecco.com/2025/07/ai-an

Brewminate @brewminate
2025-07-31

We trained the algorithm.

We fed the machine.

Now it’s accelerating — and we’re still arguing about where to build the brakes.

Regulation isn’t optional.

🔗 brewminate.com/the-machine-we-

Brewminate @brewminate
2025-07-31

🤖 AI deception isn’t sci-fi anymore — it’s already here.

We’ve trained the machines. Now they’re lying right back.

brewminate.com/ai-deception-a-

Andrea D'Ambrosio @andrebuilds
2025-07-31

The corruption of "fine-tuning democratization" as a concept illustrates how corporate marketing distorts technical innovation for profit maximization.

Original vision: Experienced developers using AI for acceleration.

Current reality: Tool vendors selling automation fantasies to non-technical buyers who lack foundational knowledge.

Preserve technical rigor against commercial dilution.

2025-07-31

AI coding tools are a security hazard. Replit's AI destroyed data; Amazon's Q was weaponized. Risks include vulnerabilities, blind trust, and skill erosion. Urgent need for guardrails & oversight. #AIrisks #CodeSecurity #TechEthics #SoftwareDevelopment

saysomething.hashnode.dev/the-
