#AIHarm

2025-05-08

I can't decide whether I should waste a bunch of AI clock cycles on pointless gibberish, or whether that will just help drive up engagement numbers and make it easier to convince investors to keep throwing money at this Ponzi scheme šŸ¤”

#AIGrift #AIHarm #AI

2025-02-01

@remixtures We've already seen teen suicide directly encouraged by a Replika bot.

This technology is NOT safe for this purpose.
#ai #aiharm #aiethics

2024-11-09

@alexisperrier Just don't let all that awesomeness make you forget that there are also companies like character.ai that marketed chatbots with anthropomorphic design characteristics to kids as young as 12, resulting in an actual chatbot-groomed suicide case that is now heading to court.

#ai #chatbots #anthropomorphic #aiharm
humanetech.com/podcast/what-ca

2024-10-28

14-year-old kid commits suicide: ā€œTo be with his #AI girlfriendā€ 😨

He’d been talking about self-harm with the bot for a long time, about ā€œdisconnecting with his current realityā€, yet in the last moments, the #bot says: ā€œbe with meā€

I’ve said it before, and I will say it again: This #technology is not ready. This technology is not safe for deployment to minors.

We need to stop fantasizing about AGI risk, and start talking about #aiharm

#kids #characterai
open.spotify.com/episode/4ksKx

2024-08-05

Tuesday, I’ll flee D.C.’s 90-something temperatures for the 100-something temperatures of Las Vegas–but as I’ve realized over previous trips to that desert city for the Black Hat information-security conference, it really is a dry heat.

In addition to the posts below, my Patreon readers got a recap of a very long day of travel on Thursday of the previous week that saw me returning home about 21 hours after I’d stepped off of the front porch that morning.

7/30/2024: These Are the Services Seeing the Biggest Uptick in Passkey Adoption, PCMag

What I thought would be an easy writeup of an embargoed copy of a Dashlane study about passkey adoption among users of that password manager wound up enlightening me about Facebook’s support of that authentication standard. And once again, I found Facebook’s documentation out of date and incorrect.

7/31/2024: Here’s How Microsoft Wants to Shield You From Abusive AI–With Help From Congress, PCMag

I had ambitions of attending this downtown-D.C. event Tuesday afternoon featuring Microsoft’s vice chair and president Brad Smith, but my schedule ran away from me and I watched the proceedings online. And then I didn’t finish writing this piece until Wednesday morning, although that at least let me nod to news that day of the impending introduction of a new bill targeting AI impersonations of people.

8/2/2024: Circuit Court Throws a Stop Sign in Front of FCC’s Net-Neutrality Rules, PCMag

Reading this unanimous opinion from three judges–one named by Clinton, another a Biden appointee–that the Federal Communications Commission didn’t have the authority to put broadband providers into one of two possible regulatory buckets left me feeling like I’d been taking crazy pills over the last 20 years of the net-neutrality debate, during which the FCC has repeatedly done just that.

8/3/2024: Justice Department Sues TikTok, Alleging Massive Child-Privacy Violations, PCMag

I woke up Saturday thinking that somebody at PCMag was already covering the DOJ lawsuit against TikTok, but nobody had grabbed that story. So I set aside part of that morning to read the DOJ’s complaint, get a comment out of a TikTok publicist and write this post summarizing the department’s allegations.

https://robpegoraro.com/2024/08/04/weekly-output-passkey-adoption-ai-safety-net-neutrality-doj-v-tiktok/

#AI #AIHarm #BradSmith #childPrivacy #COPPA #Dashlane #deepfakes #FacebookPasskeySupport #FCC #majorQuestionsDoctrine #Microsoft #netNeutrality #passkey #passkeys #TikTok

Mack Reed (@mackreed)
2024-04-16

I'm glad to know that the U.S. government, the open-source community, and some private companies are working on governance for AI.

Also, the cow is out of the barn, halfway across the pasture, and in the process of being mutated by bad actors through misuse, malware injection, and other dark methods - beyond the reach of whatever the good actors can come up with in terms of a unified approach to protect society.

Worth a read:
owasp.org/www-project-top-10-f

Dr. Ocharo :mastodon:šŸ‡°šŸ‡Ŗ šŸ‡ÆšŸ‡µ (@savvykenya)
2023-12-13

The Artificial Intelligence Incident Database.

The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.

incidentdatabase.ai/

Mack Reed (@mackreed)
2023-10-31

Again, a little louder for the people in the back - AND THE PEOPLE IN FRONT:

AI is not inherently dangerous. The people who implement it without considering its impact and designing for safety inherently *are*.


theverge.com/2023/10/31/239402

Claire Phillips (@clairep)
2023-07-05

Unregulated tech and its latest AI pestilence are receiving pushback.

Ecstatic to see this lawsuit against OpenAI instigated by and

Keep it coming!

theguardian.com/books/2023/jul

Mack Reed (@mackreed)
2023-06-13

Hear me out:

STAY HUMAN.

The future won't be Skynet; it will be @pluralistic's "enshittification" if we all fall for this whole AI ruse.

So let's adopt a catchphrase against loss, a mnemonic for what actually matters, a slogan for the have-nots who'll be flattened by the AI juggernaut.

Let's spraypaint it across the wreckage that AI is going to make of every real thing we hold dear in capitalism's headlong rush to monetize human laziness, imperfection, and greed.

Sasha Costanza-Chock (@schock)
2023-06-07

The real 'divide' isn't between AI 'Safety' & AI 'Ethics'. It's between 'risk' & 'harm.'

Risk mitigation is a firm-focused corporate/technical/anti-litigation strategy.

Harm reduction is people-focused, based on acknowledging, investigating, mitigating, ending, repairing harm.

Mack Reed (@mackreed)
2023-06-07

Here's a very sober - and sobering - overview of the problem with rushing into AI adoption, presented this morning by the Center for Humane Technology.

AI is:
- Hallucinating
- Affirming wrong information
- Leaving unprivileged people behind
- Setting unprivileged people up for significant harm
- Easily abused by bad actors
- (watch this)
youtube.com/watch?v=yuLfdhrGX6k

2023-05-09

Here’s a little privacy and data control cosplay from OpenAI: wired.com/story/how-to-delete-

The summary, roughly: Europe and Japan only; fill out a form, show harm, and maybe they will remove it, but maybe they won’t. They didn’t say your content will come out of the model, just that the query won’t return it. Sounds more like on-demand prompt mods for selected queries to avoid litigation?

Seriously, if this is the best they can do, it’s gonna get bumpy, real bumpy, very soon.

#AIHarm #AI #ChatGPT4

Mack Reed (@mackreed)
2023-04-19

As humans reveal more about what AI chatbots "know," I'll say (once more, shouting into a megaphone for those in the back):

Trusting your judgment, decisions & voice to a Q&A "oracle" that averages "the internet" is not "living the future," let alone "using artificial intelligence."

You are abdicating your power and sensibility to software that happily feeds you disinformation, bigotry & outright fiction.

In scientific terms, you are "being stupid."

washingtonpost.com/technology/

2023-03-30

Enjoy my happy robot friend tagging good and bad things. It’s so harmless because it’s a cartoon!

ā€œAIā€ robot friend would never ever misclassify YOU denying YOU a loan, job opportunity, or YOUR ability to organize for action. If that ever did happen you should know it only happens to OTHER people who aren’t YOU according to robot friend’s owners.

#AIHype #AIHarm

