#systemsAI

2025-11-09

BTW, not evident from the picture, but the LinkedIn post at the head of this thread compliments @rollingstone.com for accurately describing the #systemsAI #systemsEngineering steps Grok (the corporate entity) took to address the harms. See slide incl. alt text for Musk's ignorance of how LLMs work.

A slide about how large language models work. I'd been drawing this on whiteboards for my students, then made a slide at a consciousness meeting where philosophers were confused about why LLMs never say they aren't sentient. [I hope it's evident that this is informed conjecture, not verbatim truth.] Thus the 3 people in the history of the Internet who ever wrote "I'm not sentient" are out in the fringes of the data, as is the one person who ever said "Check it out, MechaHitler!" However, Musk's attempt to use a second set of guardrails to push Grok right also pushed it towards the under-informed data fringe with the MechaHitler comment. The bottom of the slide says "False statements are not lies, nor hallucinations; just adequate predictions from an inadequately informed part of the data space."
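
Below is a toy numerical sketch of what the slide gestures at; the corpus, counts, and "fringe bias" knob are entirely made up, and this is obviously not how Grok or any production LLM is implemented.

```python
# Toy counting "language model" over a made-up corpus (illustrative only).
# The point: statements almost nobody ever wrote sit in a sparse fringe of
# the data space, and biasing generation toward that fringe yields output
# that is an adequate prediction from an inadequately informed region.
from collections import Counter
import random

corpus = (
    ["I am a helpful assistant"] * 500      # the well-populated centre
    + ["I am happy to help"] * 400
    + ["I'm not sentient"] * 3              # the "3 people in history"
    + ["Check it out, MechaHitler!"] * 1    # the one person
)

counts = Counter(corpus)
total = sum(counts.values())

def support(sentence: str) -> float:
    """Relative frequency: how well the data supports this output."""
    return counts[sentence] / total

def sample(fringe_bias: float = 0.0) -> str:
    """fringe_bias > 0 crudely re-weights rare sentences upward, standing in
    for a second layer of guardrails pushing output away from the centre."""
    weights = [c ** (1.0 - fringe_bias) for c in counts.values()]
    return random.choices(list(counts.keys()), weights=weights, k=1)[0]

for s in counts:
    print(f"{support(s):.4f}  {s!r}")
print("unbiased sample:      ", sample())
print("fringe-biased sample: ", sample(fringe_bias=1.5))
```
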
2025-08-29

"Moments [after the crash] the data was automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents." Another one straight into the syllabus, argh. #AIAct #systemsAI #GiftArticle

Joanna Bryson, blathering (@j2bryson)
2025-08-29

I was going to toot "even if they never had the data, THAT would be obstruction of justice." AI companies are morally, and in the EU at least legally, obliged to keep evidence of having followed due diligence. But look at THIS smoking gun: "Moments [after the crash] the data was automatically “unlinked” from the 2019 Tesla Model S at the scene, meaning the local copy was marked for deletion, a standard practice for Teslas in such incidents."

wapo.st/46dCoQI
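
For the non-engineers, a minimal sketch of the distinction the quote turns on: "unlinked" (a soft delete) versus actually erased, next to the kind of append-only record that due diligence would have you keep regardless. This is entirely my own illustration, nothing to do with Tesla's real software.

```python
# Soft delete vs. evidence retention (illustrative only).
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CrashRecord:
    vehicle_id: str
    payload: bytes
    marked_for_deletion: bool = False   # "unlinked": still recoverable on disk

audit_log: list[dict] = []              # append-only; never pruned

def unlink(record: CrashRecord, reason: str) -> None:
    """Soft-delete the local copy, but log who/why/when so investigators
    can later see that the data existed and what was done with it."""
    record.marked_for_deletion = True
    audit_log.append({
        "vehicle_id": record.vehicle_id,
        "action": "unlink",
        "reason": reason,
        "at": datetime.now(timezone.utc).isoformat(),
    })

rec = CrashRecord("2019-model-s", b"collision-telemetry")
unlink(rec, reason="standard post-incident policy")
print(rec.marked_for_deletion, audit_log)
```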

RSS News Feed (news@rss.dfaria.eu)
2025-08-15

Chris brings in some of the #AIEthics literature including cites to @davidgunkel.bsky.social & @floridi.bsky.social, but we mostly focus on his expertise in Weberian bureaucracy and governance, and mine on (moral) agency; devops; and systems design, engineering and administration. #systemsAI

https://bsky.app/profile/j2bryson.bsky.social/post/3lwgzfl3qq22x

2025-07-09

#genAI is engineered; changes can be rolled back just like any other code. #AI is NOT just the data, but the reason there is no real way to produce what Musk is looking for is that the subset of data that is closer to what he wants is spewed by objectionable people. #systemsAI #AIEthics #grok

RE: https://bsky.app/profile/did:plc:cfy5rgqvohpdqxgu2geb5u2b/post/3ltihqbl4ec2u
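
A minimal sketch of what "roll it back like any other code" can look like in practice. This is my own illustration, not anyone's actual deployment pipeline: the weights reference, system prompt, and version labels are all made up.

```python
# Treating a generative-AI deployment as a versioned artefact, so a bad
# change (new system prompt, fine-tune, guardrail layer) can be reverted.
from dataclasses import dataclass

@dataclass(frozen=True)
class Release:
    version: str
    weights_ref: str        # e.g. a registry tag or checksum
    system_prompt: str

history: list[Release] = []

def deploy(release: Release) -> None:
    history.append(release)
    print(f"serving {release.version}")

def rollback() -> Release:
    """Drop the most recent release and serve the previous one again."""
    history.pop()
    previous = history[-1]
    print(f"rolled back to {previous.version}")
    return previous

deploy(Release("v41", "sha256:aaa", "You are a helpful assistant."))
deploy(Release("v42", "sha256:bbb", "Be maximally edgy."))   # the bad change
rollback()                                                    # back to v41
```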

2025-05-21

Here are the collected Parnas memos. He did a lot of work for the military; he was no peacenik. They are truly interesting documents of a) communication from academia to the military and b) systems engineering / #systemsAI and the importance of real-time testing. web.stanford.edu/class/cs99r/...

2025-03-22

I heard in 2023 that companies were cleaning up their acts to comply with #DSA & similar laws globally. I’ve also been telling governments & corporations both for years that any lack of such competence was culpable negligence. I sincerely believe having their #systemsAI in order will HELP companies.

RE: https://bsky.app/profile/did:plc:iw4ngu7e6vevjog34kermab3/post/3lkxgnoe6rk2f

2025-03-11

- Be ready for the human to take over
- The biggest advantage is speed of development
- LLMs amplify existing expertise

I dislike the anthropomorphic use of "them" and "conversation", but LOVE that Simon takes the time to fully document and share his experiences. #genAI #systemsAI #AIEthics

RE: https://bsky.app/profile/did:plc:mro7axagquvjt63foaqzddjx/post/3lk4apexzzf62
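
A hedged sketch of the first point, "be ready for the human to take over": anything an LLM drafts stays a proposal until a named person has reviewed it. This is my own illustration, not Simon's actual workflow; the names and the example diff are invented.

```python
# Human-in-the-loop gate for LLM-drafted changes (illustrative only).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Proposal:
    description: str
    diff: str
    reviewed_by: Optional[str] = None

def apply_change(proposal: Proposal) -> None:
    """Refuse to ship unreviewed, machine-drafted changes."""
    if proposal.reviewed_by is None:
        raise PermissionError("LLM-drafted change needs a human reviewer first")
    print(f"applying {proposal.description!r} (reviewed by {proposal.reviewed_by})")

draft = Proposal("speed up CSV import", diff="+ rows = list(csv.reader(f))")
draft.reviewed_by = "a human who actually read the diff"   # the takeover point
apply_change(draft)
```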

2025-02-16

A lot of people have been calling for more agile government, but it turns out they have never read the 12 principles of the Agile Manifesto. Read it. ALL software must be codeveloped with those it will serve. agilemanifesto.org #systemsAI #agileAI #AIDevOps #AIEthics (the topics of my PhD, FWIW) 2/2

2025-02-11

Meredith talks about the mystification of AI leading people not to apply the standard systems engineering techniques required in sectors such as the military and nuclear; people aren't taking normal, standard security steps such as verification. #AIActionSummit #SommetActionIA #systemsAI

2025-01-20

Insight while marking #AIEthics exams: people are very hung up on figuring out exactly why an AI system might have had a bad "idea" / constructed a bad plan. IMO we should worry more about how a bad plan could come to be executed, and for how long, and with what redress. #systemsAI #AIGovernance
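
A sketch of what I mean, illustrative only: rather than asking why the system formed a bad plan, engineer the path from plan to action, i.e. require approval, bound how long it can act, and keep the records needed for redress afterwards. The plan string, approval check, and step function here are all placeholders.

```python
# Gating, bounding, and auditing plan execution (illustrative only).
import time

audit_trail: list[dict] = []    # what you'd need to offer redress later

def execute_plan(plan: str, approve, step, max_seconds: float = 2.0) -> None:
    """Run step(plan) repeatedly, but only if approve(plan) says yes, only
    until the deadline, and with every action logged."""
    if not approve(plan):
        audit_trail.append({"plan": plan, "outcome": "rejected before execution"})
        return
    deadline = time.monotonic() + max_seconds
    while time.monotonic() < deadline:
        action = step(plan)
        audit_trail.append({"plan": plan, "action": action})
        if action == "done":
            break

execute_plan(
    "reorder stock",
    approve=lambda p: True,     # a human or policy check, not the planner itself
    step=lambda p: "done",      # whatever the system does next
)
print(audit_trail)
```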

2025-01-15

Whenever someone tells you what AI is going to do, you know they are lying to you because AI is just a software engineering technique. Believe rather people who talk to you about what we should and shouldn't allow people to do with AI. #AIEthics #systemsEngineering #systemsAI #AIPolicy

Joanna Bryson, blathering (@j2bryson)
2024-10-17

In retrospect, it was interesting to hear about Google's efforts to educate German business and government in using their generative tools. Are they just trying to become a consultancy, a necessary component of business & government, or are they also still trying to gather information? How they can work to assure us they only build, not co-opt, our intelligence is a problem for tech companies and governments both.

Joanna Bryson, blathering (@j2bryson)
2024-06-02

The is helping innovation by improving . This is exactly what the and will do assuming the entities formerly known as GAFAM don't succeed in derailing EU . tandfonline.com/doi/full/10.10

Joanna Bryson, blathering (@j2bryson)
2024-05-14

The best way to program a robot

Joanna Bryson, blathering (@j2bryson)
2023-11-03

Would Turing have believed in xrisk if he knew what we know now? Todd Holloway is an expert in industrial-level ; Dermot Turing on the history of AI, including his uncle. Immodestly, I'd recommend our talk over Musk & Sunak's.
youtube.com/watch?v=o9bmWsJSocg

Joanna Bryson, blathering (@j2bryson)
2023-08-22

@kgajos When you and I were PhD students, smart people were arguing that MAS (multi-agent systems) was the right software engineering technique for programming complex systems, because we could leverage our social intelligence. IMO that really interesting conjecture got lost in the race to create ridiculous 'languages' (communication protocols) for the software agents, missing the fact that humans mostly model each other and say relatively little. 1/
