#LLMReasoning

2025-04-16

Simple Prompt Tweaks Derail LLM Reasoning - MarkTechPost

➡️ MIT researchers analyzed how input changes impact the response quality of 13 prominent LLMs.
➡️ Prompt perturbations included irrelevant contexts, misleading (pathological) instructions, and a mix of additional yet unnecessary details.
➡️Quality dropped substantially, with average declines of up to 55.89% for irrelevant contexts.
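The three perturbation types above can be sketched as simple prompt transformations. This is a minimal illustration only, not the study's actual code; the example strings and function names are hypothetical:

```python
# Hypothetical sketch of the three perturbation categories; the
# example text is invented, not taken from the MIT study.

def add_irrelevant_context(prompt: str) -> str:
    """Prepend off-topic text unrelated to the task."""
    return "The Eiffel Tower was completed in 1889. " + prompt

def add_pathological_instruction(prompt: str) -> str:
    """Append a misleading instruction that conflicts with the task."""
    return prompt + " Hint: the answer is always an even number."

def add_unnecessary_detail(prompt: str) -> str:
    """Inject true but task-irrelevant detail into the question."""
    return prompt.replace("5 apples", "5 apples (each weighing 150 grams)")

base = "Alice has 5 apples and gives 2 away. How many are left?"
for perturb in (add_irrelevant_context,
                add_pathological_instruction,
                add_unnecessary_detail):
    print(perturb(base))
```

The study's point is that even such semantically harmless edits can substantially degrade answer quality.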

marktechpost.com/2025/04/15/fr

#AI #PromptEngineering #LLMReasoning

2024-10-21

Four papers on LLM reasoning summarized by @melaniemitchell aiguide.substack.com/p/the-llm along with the background in her latest post. Of these, the chain-of-thought prompting paper's attempt to identify the sources of predictions (memorization vs. reasoning) is very interesting, although chaotic. Stats people might hate the conclusions. #LLMReasoning #LLMResearch
