@AAKL @kdkorte @avuko @Reuters
LLMs are improving constantly, literally from month to month.
While it was excusable to dismiss the early (pre-2024) models as pure stochastic ML, that's no longer the case.
As of May, most of the frontier models are reasoner models.
Markov Model
• Predicts next word based ONLY on last word(s)
• No memory, no context
• Like texting with autocomplete that forgets everything 3 words ago
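To make the Markov point concrete, here's a toy bigram sketch in Python (the corpus and names are my own, purely illustrative): the next word is chosen from what followed the *single* previous word, and nothing earlier exists for it.

```python
import random
from collections import defaultdict

# Toy bigram Markov model: the "memory" is exactly one previous word.
corpus = "the cat sat on the mat and the cat slept".split()

table = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    table[prev].append(nxt)  # record which words follow each word

def generate(start, n=8):
    word, out = start, [start]
    for _ in range(n):
        options = table.get(word)
        if not options:                    # dead end: word was never followed by anything
            break
        word = random.choice(options)      # pick a follower, ignoring all earlier context
        out.append(word)
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the mat and the cat"
```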
Context LLM (GPT, Claude, etc.)
• Reads your ENTIRE prompt at once
• Remembers context across thousands of words
• Still just predicting next word, but way smarter about it
• Pattern-matching but on roids
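For contrast with the Markov toy above, a minimal sketch using the Hugging Face transformers library, with small GPT-2 as a stand-in (the frontier models obviously aren't what's loaded here): it's still next-token prediction, but every token in the prompt is in view at once.

```python
# Assumes the `transformers` library and the small GPT-2 checkpoint;
# illustrative only, not the setup behind any frontier model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "Alice handed the keys to Bob because she was leaving. "
    "Later, the person holding the keys was"
)
# The whole prompt is conditioned on at once, so the continuation can draw on
# "Bob", even though he appeared many tokens earlier in the context.
print(generator(prompt, max_new_tokens=5, do_sample=False)[0]["generated_text"])
```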
Reasoner
• Actually does logic/math step by step
• Can verify if answers are correct
• Like a calculator vs a really good guesser
• Traditional AI = chess engines, theorem provers
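The "calculator vs a really good guesser" point, sketched as a generate-then-verify loop. `ask_model` is a hypothetical stand-in for any model call, not a real API; the verifier is ordinary exact arithmetic.

```python
# Hypothetical generate-then-verify loop: the model proposes, plain code checks.

def ask_model(question: str) -> str:
    # Placeholder for an API call to a reasoner model.
    return "127 * 43 = 5461"

def verify(expr: str) -> bool:
    lhs, rhs = expr.split("=")
    a, b = (int(x) for x in lhs.split("*"))
    return a * b == int(rhs)   # exact arithmetic, no guessing

answer = ask_model("What is 127 * 43?")
print(answer, "->", "verified" if verify(answer) else "rejected")
```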
Are LLMs "reasoning" or just really good at faking it?
No certainty either way.
If you can't tell the difference from outside, does the internal state matter? If the evil man's good deeds are indistinguishable from a good man's deeds, who cares about his "true nature"?
Maybe we're ALL just faking reasoning, sophisticated pattern matchers who learned to talk about "thinking" convincingly enough that we believe our own PR.
Does the distinction collapse if the performance is perfect enough? 🤔
#LLM #AI #philosophy #meta