I can decide if I should waste a bunch of AI clock cycles on pointless gibberish, or if that will just help drive up engagement numbers and make it easier to convince investors to keep throwing money at this Ponzi scheme.
@remixtures We've already seen teen suicide directly encouraged by a Replika bot.
This technology is NOT safe for this purpose.
#ai #aiharm #aiethics
@alexisperrier Just don't let all that awesomeness make you forget that there are also companies like character.ai that marketed chatbots with anthropomorphic design characteristics to kids as young as 12, resulting in an actual chatbot-groomed suicide case, which is now going to the courts.
#ai #chatbots #anthropomorphic #aiharm
https://www.humanetech.com/podcast/what-can-we-do-about-abusive-chatbots-with-meetali-jain-and-camille-carlton
14-year-old kid commits suicide: "To be with his #AI girlfriend"
He'd been talking about self-harm with the bot for a long time, about "disconnecting with his current reality", yet in the last moments, the #bot says: "be with me"
I've said it before, I will say it again: This #technology is not ready. This technology is not safe for deployment for those underage.
We need to stop fantasizing about AGI risk, and start talking about #aiharm
#kids #characterai
https://open.spotify.com/episode/4ksKxpSW9fMPNgUqHFcTLG
Tuesday, I'll flee D.C.'s 90-something temperatures for the 100-something temperatures of Las Vegas, but as I've realized over previous trips to that desert city for the Black Hat information-security conference, it really is a dry heat.
In addition to the posts below, my Patreon readers got a recap of a very long day of travel on Thursday of the previous week that saw me returning home about 21 hours after I'd stepped off the front porch that morning.
7/30/2024: These Are the Services Seeing the Biggest Uptick in Passkey Adoption, PCMag
What I thought would be an easy writeup of an embargoed copy of a Dashlane study about passkey adoption among users of that password manager wound up enlightening me about Facebook's support of that authentication standard. And once again, I found Facebook's documentation out of date and incorrect.
7/31/2024: Here's How Microsoft Wants to Shield You From Abusive AI, With Help From Congress, PCMag
I had ambitions of attending this downtown-D.C. event Tuesday afternoon featuring Microsoft's vice chair and president Brad Smith, but my schedule ran away from me and I watched the proceedings online. And then I didn't finish writing this piece until Wednesday morning, although that at least let me nod to news that day of the impending introduction of a new bill targeting AI impersonations of people.
8/2/2024: Circuit Court Throws a Stop Sign in Front of FCC's Net-Neutrality Rules, PCMag
Reading this unanimous opinion from three judges (one named by Clinton, another a Biden appointee) that the Federal Communications Commission didn't have the authority to put broadband providers into one of two possible regulatory buckets left me feeling like I'd been taking crazy pills over the last 20 years of the net-neutrality debate, during which the FCC has repeatedly done just that.
8/3/2024: Justice Department Sues TikTok, Alleging Massive Child-Privacy Violations, PCMag
I woke up Saturday thinking that somebody at PCMag was already covering the DOJ lawsuit against TikTok, but nobody had grabbed that story. So I set aside part of that morning to read the DOJ's complaint, get a comment out of a TikTok publicist, and write this post summarizing the department's allegations.
#AI #AIHarm #BradSmith #childPrivacy #COPPA #Dashlane #deepfakes #FacebookPasskeySupport #FCC #majorQuestionsDoctrine #Microsoft #netNeutrality #passkey #passkeys #TikTok
I'm glad to know that the U.S. government, the open-source community, and some private companies are working on governance for AI.
Also, the cow is out of the barn, halfway across the pasture, and in the process of being mutated by bad actors through misuse, malware injection, and other dark methods - beyond the reach of whatever unified approach the good actors can come up with to protect society.
Worth a read:
https://owasp.org/www-project-top-10-for-large-language-model-applications/llm-top-10-governance-doc/LLM_AI_Security_and_Governance_Checklist-v1.1.pdf
#ai #aiharm #aigovernance #stayhuman
A clear and present danger: https://www.washingtonpost.com/technology/2024/01/20/openai-dean-phillips-ban-chatgpt/ #aiharm #stayhuman
The Artificial Intelligence Incident Database.
The AI Incident Database is dedicated to indexing the collective history of harms or near harms realized in the real world by the deployment of artificial intelligence systems. Like similar databases in aviation and computer security, the AI Incident Database aims to learn from experience so we can prevent or mitigate bad outcomes.
ICYMI, here's how AI goes MAD: https://www.tomshardware.com/news/generative-ai-goes-mad-when-trained-on-artificial-data-over-five-times
Again, a little louder for the people in the back - AND THE PEOPLE IN FRONT:
AI is not inherently dangerous. The people who implement it without considering its impact and designing for safety inherently *are*.
#aiharm #ethicalai #aiethics #stayhuman
https://www.theverge.com/2023/10/31/23940298/ai-generated-poll-guardian-microsoft-start-news-aggregation?mc_cid=d8c8445d86&mc_eid=3f556a867c
Unregulated tech and its latest AI pestilence is receiving pushback.
Ecstatic to see this lawsuit against OpenAI instigated by #PaulTremblay and #MonaAwad
Keep it coming!
Hear me out:
STAY HUMAN.
The future won't be Skynet, it will be @pluralistic's "enshittification" if we all fall for this whole AI ruse.
So let's adopt a catchphrase against loss, a mnemonic for what actually matters, a slogan for the have-nots who'll be flattened by the AI juggernaut.
Let's spraypaint it across the wreckage that AI is going to make of every real thing we hold dear in capitalism's headlong rush to monetize human laziness, imperfection, and greed.
The real 'divide' isn't between AI 'Safety' & AI 'Ethics'. It's between 'risk' & 'harm.'
Risk mitigation is a firm-focused corporate/technical/anti-litigation strategy.
Harm reduction is people-focused, based on acknowledging, investigating, mitigating, ending, repairing harm.
Here's a very sober - and sobering - overview of the problem with rushing into AI adoption, presented this morning by Center for Humane Technology.
AI is:
- Hallucinating
- Affirming wrong information
- Leaving unprivileged people behind
- Setting unprivileged people up for significant harm
- Easily abused by bad actors
- (watch this)
https://www.youtube.com/watch?v=yuLfdhrGX6k
#ai #aiharm #aidesign #aiart #chatgpt #aihype #whatguardrails
Here's a little privacy and data control cosplay from OpenAI https://www.wired.com/story/how-to-delete-your-data-from-chatgpt/
Summary is roughly: Europe and Japan only; fill out a form, show harm, and maybe they will remove it, but also maybe they will not. They didn't say your content will come out, just that the query won't return it. Sounds more like on-demand prompt mods for selected queries to avoid litigation?
Seriously, if this is the best they can do, it's gonna get bumpy, real bumpy, very soon.
As humans reveal more about what AI chatbots "know," I'll say (once more, shouting into a megaphone for those in the back):
Trusting your judgment, decisions & voice to a Q&A "oracle" that averages "the internet" is not "living the future," let alone "using artificial intelligence."
You are abdicating your power and sensibility to software that happily feeds you disinformation, bigotry & outright fiction.
In scientific terms, you are "being stupid."
https://www.washingtonpost.com/technology/interactive/2023/ai-chatbot-learning/ #ai #aiharm
Enjoy my happy robot friend tagging good and bad things. It's so harmless because it's a cartoon!
"AI" robot friend would never ever misclassify YOU, denying YOU a loan, job opportunity, or YOUR ability to organize for action. If that ever did happen, you should know it only happens to OTHER people who aren't YOU, according to robot friend's owners.