#Cot

2025-12-21

Dự án huấn luyện AI lớn đang bắt đầu tích hợp dữ liệu suy luận kiểu Chain-of-Thought (CoT) vào tập luyện. Điều này giúp mô hình hiểu sâu hơn các bước logic, nâng cao khả năng giải quyết vấn đề. Nguồn: [Reddit]([link]) #AINN #HocMay #CoT #KhoaHocDuLieu #AIResearch #MachineLearning #NeuralNetworks #TuitionLearning

reddit.com/r/LocalLLaMA/commen

Công ty đèn Led | HALED STOREcongtydenled
2025-12-08

Cột đèn chiếu sáng sân vườn là giải pháp chiếu sáng và trang trí không gian ngoại thất như sân vườn, công viên, lối đi, khu nghỉ dưỡng. Sản phẩm có thiết kế tinh tế, chiều cao thấp từ 2–4m, đa dạng mẫu mã, chất liệu bền bỉ.

Xem ngay: congtydenled.com.vn/cot-den-ch
Liên Hệ:
Hotline 0332599699
Địa chỉ: Số 3D2, KDT Cầu Diễn, Bắc Từ Liêm, Hà Nội
Website: congtydenled.com.vn/

ty đèn LED
-den-chieu-sang
đèn chiếu sáng

-den-chieu-sang-san-vuon

Cột đèn chiếu sáng sân vườn cao cấp
2025-11-08

"Phương pháp LEASH (Logit-Entropy Adaptive Stopping Heuristic) giúp tối ưu hóa quá trình suy luận Chain-of-Thought (CoT) trong các mô hình ngôn ngữ. Thay vì tạo giải thích dài, LEASH dừng lại khi xác suất token và cải thiện top-logit ngừng thay đổi, tiết kiệm 30-35% token và 27% thời gian. Tuy nhiên, độ chính xác giảm ~10%. Phù hợp với mô hình được tinh chỉnh. #AI #ML #NLP #HiệuSuất #HàngĐầu #MachineLearning #SuyLuận #CoT"

reddit.com/r/singularity/comme

Wikimédia Portugalwikimediapt@masto.pt
2025-09-22

📸 Aí está parte do COT – Core Organizing Team do GLAM Wiki 2025, reunido em Lisboa para preparar todos os detalhes deste grande encontro internacional! 🌍

De 30 de outubro a 1 de novembro com:
🎤 Painéis e apresentações
🛠️ Oficinas práticas
💻 Hackathon
🧭 Sessões de estratégia
🏛️ Tours culturais

👉 Nota: as inscrições terminam a 30 de setembro.

#GLAMWiki2025 #Lisboa #COT #GLAM #Wikimedia #WikiLovers #WikimediaPortugal #WMPT #OpenKnowledge #OpenCulture

COT – Core Organizing Team do GLAM Wiki 2025, reunido em Lisboa para preparar todos os detalhes deste grande encontro internacional!
2025-08-20

[Перевод] LLM и их хрупкая логика: новое исследование ставит под сомнение Chain-of-Thought

Новое исследование учёных из Университета штата Аризона показывает: знаменитое «цепочечное рассуждение» (Chain-of-Thought, CoT) в больших языковых моделях (LLM) скорее похоже на «хрупкий мираж», чем на проявление подлинного интеллекта. Эта работа продолжает традицию критического анализа глубины рассуждений LLM, но в отличие от предыдущих исследований предлагает уникальный взгляд через призму «распределения данных», который позволяет понять, где и почему CoT систематически даёт сбой.

habr.com/ru/companies/technokr

#большие_языковые_модели #искусственный_интеллект #ai #llm #cot #chain_of_thoughts

Nick Byrd, Ph.D.ByrdNick@nerdculture.de
2025-08-20

How can #AI "reasoning" models be more efficient?

Liu trained them on only chains-of-thought (CoTs) — no prompts.

This seemed to teach models
- #CoT reasoning
- conditions that trigger longer CoTs
- default to shorter CoTs in other conditions

🔓doi.org/10.48550/arXiv.2506.12

#CogSci

Methods 1 of 2: Question-free fine-tuning (QFFT)Methods 2 of 2: Question-free fine-tuning (QFFT)Accuracy-cost tradeoffs (see "Acc" and "tokens" in the tables).Conditions that trigger longer chain of thought reasoning by mathematical problem difficulty (1-5)
2025-08-16

Just in case you were holding your breath for LLMs that had "reasoning" or "chain of thought" to let them "think" more logically, you might want to consider how the AI companies are going about this.

Which path do you think they've pursued?
(a) link the power of LLM natural language skills with the existing non-generative AI problem-solving, path-exploring, optimization tools, or
(b) get their LLMs to produce a set of steps that statistically looks like it would be a chain of thought, interpolating the already collected training material -- an approach that breaks as soon as the problem to be broken down can't be recognized in the training data?

youtube.com/watch?v=TSaiuXPe6cw

Asking a chatbot to explain its reasoning (it can't, it just sounds like it) or explain its errors (it can't, it just sounds like it) will produce convincing but completely fabricated text. Asking a chatbot to break a problem down into a series of logical steps and then solve them will only work when there are close enough examples of very similar problems in its training. It can adopt the form and the prosaic features in its answers, but it can't (yet) apply logic.

Maybe some time in some unlimited-power future, the depth of learning in a huge model will abstract and embed the structure of logic, reasoning, analogy, induction, probability etc., from some body of human writing, but it appears (to me) almost impossible to extract that from any existing corpus. The currently available training material has a wealth of examples of simple logic and way too many examples of bad reasoning.

If you know of researchers that are following path (a) above with any success, let me know. I think that's the only way we'll get any useful explorative, extrapolative, less-brittle AI.

#AI #CoT #LLM

2025-08-15

#Youtube #Video: #AI ‘chain of thought’: still just a ‘brittle mirage’
youtube.com/watch?v=TSaiuXPe6cw
#LLM #reasoning #COT

Alexander Gerberagerber@troet.cafe
2025-08-11

Die #KI befindet sich mit seiner #Denkfähigkeit per #CoT also auf der höhe der meisten öffentlichen Diskurse:
"Dieses Ergebnis sei sprachlich plausibel, aber logisch inkonsistent."
the-decoder.de/naechste-studie

Xamanismo Coletivoeliasulrich@hachyderm.io
2025-08-09

Is #chainofthought #Reasoning of #LLMs a Mirage?

"... Our results reveal that #CoT reasoning is a brittle mirage that vanishes when it is pushed beyond training distributions. This work offers a deeper understanding of why and when CoT reasoning fails, emphasizing the ongoing challenge of achieving genuine and generalizable reasoning.

... Our findings reveal that CoT reasoning works effectively when applied to in-distribution or near
in-distribution data but becomes fragile and prone to failure even under moderate distribution shifts.
In some cases, LLMs generate fluent yet logically inconsistent reasoning steps. The results suggest that what appears to be structured reasoning can be a mirage, emerging from memorized or interpolated patterns in the training data rather than logical inference.

... Together, these findings suggest that LLMs are not principled reasoners but rather sophisticated simulators of reasoning-like text."

The image displays a research paper titled "Is Chain-of-Thought Reasoning of LLMs a Mirage? A Data Distribution Lens," authored by Chengshuai Zhao and others. The document discusses the effectiveness of Chain-of-Thought (CoT)
Nick Byrd, Ph.D.ByrdNick@nerdculture.de
2025-08-01

Do #AI models perform better as they do more chain-of-thought (#CoT) reasoning?

When is more reasoning no longer worth it?

This paper finds near-optimal results when CoT reasoning terminates ... almost immediately?

More reason to think CoT's overrated?

doi.org/10.48550/arXiv.2505.15

"Figure 4: Accuracy of DeepSeek-R1-Distill-Qwen-7B vs. position where </think> is inserted. ... even early [termination] already yield hidden states highly similar to the final one—supporting the view that most useful reasoning content is distilled early, and extended CoT traces incur diminishing returns.""Table 1: Comparison of our ThinkLess and DeepSeek-R1 distilled models."

ThinkLess beat the distilled models in fewer than half of cases, revealing opportunity for improvement, but not necessarily a superior method.
Michael Fauscettemfauscette@techhub.social
2025-07-26

Anthropic researchers discover the weird AI problem: Why thinking longer makes models dumber
zurl.co/IeLIu
#ai #agenticai #cot

gtbarrygtbarry
2025-07-18

Research leaders urge tech industry to monitor AI’s ‘thoughts’

A key feature of AI reasoning models are their chains-of-thought or CoTs — an externalized process in which AI models work through problems, similar to how humans use a scratch pad to work through a difficult math question. Reasoning models are a core technology for powering AI agents

techcrunch.com/2025/07/15/rese

2025-07-16

#AI systems that reason in natural language can expose their intentions, making Chain-of-Thought (#CoT) monitoring a promising safety lever

🔗 arxiv.org/abs/2507.11473

#ResponsibleAI

When many of the most influential AI researchers agree on something, it’s worth paying attention - like this paper on Chain-of-Thought monitorability: a fragile but vital path to AI safety. Imagine spotting misbehaviour before it happens. Transparency matters. #AI #ChainOfThought #CoT #AISafety

Chain of Thought Monitorabilit...

2025-07-11

Mastering Reasoning with Code: Chain of Thought Pipelines in Java
Build smarter, traceable AI workflows using LangChain4j, Quarkus, and structured step-by-step logic
myfear.substack.com/p/chain-of
#Java #Quarkus #LangChain4j #CoT #Reasoning

Client Info

Server: https://mastodon.social
Version: 2025.07
Repository: https://github.com/cyevgeniy/lmst