On correct but wrong responses
For a research project I am currently evaluating all kinds of generative AI models (mostly for visual artifacts, but some text-based ones as well). There is also something of a push at my employer to use these systems more, in the name of "efficiency".
We all know that LLMs fabricate facts, meaning they produce text that is factually untrue. This happens a lot: these so-called hallucinations are a structural property of this kind of system. But I kept wondering about something else that I keep […]
https://tante.cc/2025/04/09/on-correct-but-wrong-responses/