So @scalzi@mastodon.social did an experiment where he asked all of the major LLM #artificial-#ignorance engines who he had dedicated his book The Consuming Fire to.
Anthropic's Claude admitted it didn't know.
Every other LLM engine MADE UP AN ANSWER.
Fuckin' seriously, people, NEVER TRUST AN LLM TO CORRECTLY ANSWER A QUESTION YOU DON'T ALREADY KNOW THE ANSWER TO. Unless the LLM was specifically trained on the subject matter you intend to ask it about.
There are tasks that LLMs are insanely good at, when appropriately trained. But when it comes to general knowledge, a general-purpose LLM will give you an answer THAT LOOKS AS THOUGH IT MIGHT BE CORRECT.
That's what they're designed to do. Not to KNOW. To LOOK CONVINCING.
Never forget that. No matter what bullshit the 'AI' techbros feed you. You will never achieve true #artificial #intelligence just by feeding an LLM more stolen books and content. Not happening. Period. Intelligence is more than just grammatically correct stochastic regurgitation.