#lrm

Agustín V. Startari @agustinstartari
2025-06-12

LLMs obeyed.
LRMs execute.
No subject. No reference. No intention.
Only resolution.

New article by Agustín V. Startari
📄 From Obedience to Execution: Structural Legitimacy in the Age of Reasoning Models
DOI: doi.org/10.5281/zenodo.15635364
Series:

2025-06-11

The limits of LLMs and LRMs. A pretty good analysis from Apple. #ai #LLM #LRM
marigold.cz/item/limity-lrm/

LLMs and LRMs are both useful, but if humans struggle with complex problems, don’t expect AI to be flawless. Or is this Apple’s clever distraction to mask the limits of #AppleIntelligence? Food for thought. #AI #AGI #LLM #LRM #TechEthics #Apple #MachineLearning #ArtificialIntelligence

Advanced AI suffers ‘complete ...

Aubreader @MastoAubreader@mas.to
2025-06-10

A knockout blow for LLMs? - by Gary Marcus - Marcus on AI

garymarcus.substack.com/p/a-kn

LLM “reasoning” is so cooked they turned my name into a verb

#AI #LLM #LRM

2025-06-10

If attention is all a machine needs to learn from text, then trusticion is all a human needs to learn from a machine that has learned from text.
linkandth.ink/p/trusticion-is-

#AI #LLM #LRM

2025-06-10

This shouldn't be news to anyone, but there it is. Piling up more resources to burn on generative AI won't make it better; it will only burn more resources.

theguardian.com/commentisfree/

The educator panic over AI is real, and rational.
I've been there myself. The difference is I moved past denial to a more pragmatic question: since AI regulation seems unlikely (with both camps refusing to engage), how do we actually work with these systems?

The "AI will kill critical thinking" crowd has a point, but they're missing context.
Critical reasoning wasn't exactly thriving before AI arrived: just look around. The real question isn't whether AI threatens thinking skills, but whether we can leverage it the same way we leverage other cognitive tools.

We don't hunt our own food or walk everywhere anymore.
We use supermarkets and cars. Most of us Google instead of visiting libraries. Each tool trade-off changed how we think and what skills matter. AI is the next step in this progression, if we're smart about it.

The key is learning to think with AI rather than being replaced by it.
That means understanding both its capabilities and our irreplaceable human advantages.

1/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

AI isn't going anywhere. Time to get strategic:
Instead of mourning lost critical thinking skills, let's build on them through cognitive delegation—using AI as a thinking partner, not a replacement.

This isn't some Silicon Valley fantasy:
Three decades of cognitive research already mapped out how this works:

Cognitive Load Theory:
Our brains can only juggle so much at once. Let AI handle the grunt work while you focus on making meaningful connections.

Distributed Cognition:
Naval crews don't navigate with individual genius—they spread thinking across people, instruments, and procedures. AI becomes another crew member in your cognitive system.

Zone of Proximal Development:
We learn best with expert guidance bridging what we can't quite do alone. AI can serve as that "more knowledgeable other" (though it's still early days).
The table below shows what this looks like in practice:

2/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Critical reasoning vs. cognitive delegation

Old-school focus: building internal cognitive capabilities and managing cognitive load independently.

Cognitive delegation focus: orchestrating distributed cognitive systems while maintaining quality control over AI-augmented processes.

We can still go for a jog or hunt our own deer, but to reach the stars we apes do what apes do best: use tools to build on our cognitive abilities. AI is a tool.

3/3

#AI #Education #FutureOfEducation #AIinEducation #LLM #ChatGPT #Claude #EdAI #CriticalThinking #CognitiveScience #Metacognition #HigherOrderThinking #Reasoning #Vygotsky #Hutchins #Sweller #LearningScience #EducationalPsychology #SocialLearning #TechforGood #EticalAI #AILiteracy #PromptEngineering #AISkills #DigitalLiteracy #FutureSkills #LRM #AIResearch #AILimitations #SystemsThinking #AIEvaluation #MentalModels #LifelongLearning #AIEthics #HumanCenteredAI #DigitalTransformation #AIRegulation #ResponsibleAI #Philosophy

Image description: a large table comparing unassisted critical reasoning with "cognitive delegation", leveraging AI for higher-order thinking.
Artair Geal :trek: :bearpride: @artair@ohbear.wtf
2025-06-10
2025-06-09

Large Reasoning Models (LRMs) fail to use explicit algorithms and reason inconsistently across puzzles (for high complexity tasks).

machinelearning.apple.com/rese

#lrm #llm #towersofhanoi
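
For context on the #towersofhanoi benchmark mentioned above: the "explicit algorithm" is a short recursion, and the optimal solution grows as 2^n - 1 moves, which is what makes larger instances "high complexity" for a model that has to spell out every move. A minimal Python sketch (not code from the Apple paper's evaluation):

```python
# Minimal sketch: the explicit Tower of Hanoi algorithm and the move count
# that makes large instances hard for a model emitting every step.

def hanoi(n, source="A", target="C", spare="B"):
    """Yield the optimal move sequence for n disks."""
    if n == 0:
        return
    yield from hanoi(n - 1, source, spare, target)
    yield (source, target)  # move the largest remaining disk
    yield from hanoi(n - 1, spare, target, source)

if __name__ == "__main__":
    for n in (3, 10, 20):
        moves = sum(1 for _ in hanoi(n))
        # optimal length is 2**n - 1, so the required output grows exponentially
        print(f"n={n}: {moves} moves (2**n - 1 = {2**n - 1})")
```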

ComputerBase @ComputerBase
2025-06-09

„Die Illusion des Denkens" ("The Illusion of Thinking"): how limited reasoning models like o3 and Claude 3.7 are. computerbase.de/news/apps/die-

Arie van Deursen 🇳🇱🇪🇺🟥 @avandeursen@mastodon.acm.org
2025-06-09

“Through extensive experimentation across diverse puzzles, we show that frontier Large Reasoning Models face a complete accuracy collapse beyond certain complexities.

Moreover, they exhibit a counter-intuitive scaling limit: their reasoning effort increases with problem complexity up to a point, then declines despite having an adequate token budget.”

machinelearning.apple.com/rese

#llm #lrm

2025-06-09

Maybe we should first understand how humans think, before attempting to discredit the thinking process of LRMs. Because I’m still not convinced that our thinking process is any more sophisticated than that of LRMs. We’re just more efficient and better at it.

Because all these wrong conclusions, hallucinations, and "failures to use explicit algorithms and inconsistent reasoning across puzzles" that LRMs exhibit are very human.

#llm #apple #lrm

Bluszcz 🇵🇱 🌱🎥📷🚲👨‍💻➡️🦌 @bluszcz@pol.social
2025-06-08

I am checking out #Claude Desktop; it is quite smart with a few MCPs added.

The next step would be to duplicate its features in some #ollama-driven backend.

#llm #lrm
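
A minimal sketch of what an #ollama-driven backend could look like, assuming a local Ollama server on its default port (http://localhost:11434) and an already-pulled model; the model name and prompt below are placeholders, not details from the post:

```python
# Minimal sketch, assuming a local Ollama server on the default port and
# that the "llama3" model has been pulled; both are assumptions.
import json
import urllib.request

def ask_ollama(prompt: str, model: str = "llama3") -> str:
    """Send a single prompt to the local Ollama /api/generate endpoint."""
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # return one JSON object instead of a stream
    }).encode("utf-8")
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

if __name__ == "__main__":
    print(ask_ollama("Summarize the Tower of Hanoi puzzle in one sentence."))
```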

Jakub Jirutka 🇪🇺🇺🇦 @jakub@jirutka.cz
2025-06-08

Apple just proved AI “reasoning” models like Claude, DeepSeek-R1, and o3-mini don’t actually reason at all. They just memorize patterns really well.

The research revealed three regimes:
• Low complexity: Regular models actually win
• Medium complexity: “Thinking” models show some advantage
• High complexity: Everything breaks down completely

Most problems fall into that third category.

(Summary by Ruben Hassid)

machinelearning.apple.com/rese

#AI #LLM #LRM #aihype #Apple

2025-06-07

"complete accuracy collapse beyond certain complexities"

Interesting paper about the current limits of AI.

#AI #LLM #LRM

machinelearning.apple.com/rese
