@hllizi this is a good provocation of the above. The word "achieve" might be generous here, but it's a good point.
Design enthusiast. Experienced user.
I have been making these videos about tech & design: https://www.youtube.com/playlist?list=PLwvAAoSdsXWxHwFbULgbpVGiLmJZv4X1g
and in audio: https://pnc.st/s/faster-and-worse
An Australian in Amsterdam
AS A:
Employee
I WANT TO:
Not use this software but I have to
SO THAT:
I don't get fired
----------------
The enterprise SaaS user story
here's a question. I have an Amazon wishlist for my work. Basically it's got podcasting kit and good phones (my video cameras) on it at present.
What I wanna know is: what are things to *add* to it? Suggestions please!
https://www.amazon.co.uk/hz/wishlist/ls/3Q8VZW46J6DM6
think: AI stuff, finance stuff, podcast kit, relevant sorta stuff?
Loved @mariafarrell’s keynote at the end of the day. The metaphor of a plantation for what Big Tech has made of the internet really hit home; an environment focussed on extraction through uniformity, instead of a habitat with a large diversity, fit for living. We do not have the right to obey. Time to rewild the internet! #PubConf2025
@angelaclinkscales I hope so! I was pushing myself to get up there and try. I need practice!
Creators of LLMs benefit from the general-purposeness of their products because there are no constraints on what it means for them to *work*
They can always escape accountability because a "use case" is an abstract concept that the products will never be optimised for.
@floris nice to meet you, Floris!
I signed up to do a lightning talk at today's PublicSpaces 2025 conference then learned that *all* the others had slides and prepared presentations.
I winged it but I am afraid to see any recording if it exists.
All in all it was a great conference!
@fasterandworse i've mentioned bazel to you before in various contexts but it's the "build system" (already a meaningless name: https://circumstances.run/@hipsterelectron/114637726480959144) from google which claims: "{fast, correct} - choose two". well it's not fast at all they just say it is and they're google right? and "correct" is a funny term in software: it doesn't mean "useful" or "true" but that the software satisfied the objectives of its designers. if (like bazel) the software's capabilities are intentionally limited to that specified by google, then its designers are google, and its correctness is google's, which is neither useful nor true. my phd research attacks this precise {fast, correct} marketing fallacy for formal languages
@fasterandworse This is a great observation, and I think it does a good job of explaining why a lot of "AI add-on" products have fizzled (because they were supposed to make the original program better but measurably did not) while general-purpose tools like ChatGPT and Grok are holding firm.
@samir thanks so much, Samir!
@fasterandworse 2. LLMs are prescriptivism incarnate. 2022 is forever established, there shall be no more evolution. They will always talk like people talked in 2022, value the same things as the average Reddit poster in 2022, generate the same boilerplate that was important in 2022.
If you don’t watch Stephen’s dispatches (or listen to them: there’s a podcast), then I suggest getting on it. They’re only 5 minutes.
Two things that @fasterandworse made me realise with this latest dispatch:
1. Saying an LLM can replace a human is nonsense for many reasons, but one is that LLMs are generic, whereas a human is specific. They’re bad at chess, and so is the average human, but you don’t hire the average human to do a job, you hire a skilled one https://hci.social/@fasterandworse/114672194321815136
Dispatch #26: General Purpose is an Accountability Loophole
If they don't say what it's for, we can't say if it works.
Video: https://www.youtube.com/watch?v=_zs6BXdesaw
Podcast: https://pnc.st/s/faster-and-worse/e884f2ff/general-purpose-is-an-accountability-loophole
Also I'm wearing some of @davidgerard's merch in this vid
In short: It's a cult
Because a calculator is a device made for a specific purpose, our trust in it would be immediately corrupted if it were to give an incorrect answer. It would be useless.
If you take away the specific purpose you also take away the criteria to determine if something works or not.
“Artificial intelligence is currently busy completely de-*caring* human existence by optimizing life and doing away with the future as a source of care.” https://aworkinglibrary.com/writing/de-caring