#qwq

2025-05-28

On your marks (START), get set, go: how to beat hallucinations and teach an LLM precise calculations

START is an open-source LLM for precise calculations and code verification. START addresses the two main problems of most conventional models: hallucinations and errors in multi-step computations. In this article we look at why and exactly how these problems are solved.

habr.com/ru/companies/postgres

#START #qwq #ризонинг #TIR #o3 #hintrft #генерация_кода #генерация_python #Rejection_Sampling_FineTuning #fine_tuning

Donatas samogitian
2025-04-08

I've done some yesterday... I couldn't be bothered investigating why keeps banning my IP after fetching emails from my email server, so I've decided to delegate my issues to .

I've set up a knowledge base with all the necessary config and log files, etc., and asked to investigate... Since it's a , I had no issues submitting even the most sensitive information to it.

QwQ did come up with tailored suggestions on how to fix the problem.

Medinfo medinfo
2025-03-08

New language model from : it challenges R1. The QwQ model is significantly leaner than DeepSeek R1 and is said to be more capable in many areas.
heise.de/-10308461

Juan Miguel Guhlin mguhlin@mastodon.education
2025-03-07

Pushback: Fox News with #QwQ-32b #AI

You may know garbage journalism when you read it, but does AI? That's the question I remain perpetually curious about. In this particular case, I want to know whether running local AI via Msty with a reasoning AI chatbot will be able to tell that Fox News is full of hot air.

Local AI Model

Using Msty, I downloaded QwQ-32b (Qwen-with-Questions)

mglink.org/2025/03/07/pushback

2025-01-18

When our backup #pump failed on the #Oregon #homestead, I built a #calculator to figure out what we really needed. It’s a small, #opensource tool born from necessity and a few iterations with #SelfHosted #ai

sij.law/thinking-like-a-develo

#python #code #devops #Oregon #water #ollama #qwq #selfhosting #aiml

2024-12-28

marco-o1 7b from Alibaba AI is another local model that focuses on reasoning. Unfortunately, it answers the question wrong, but its reasoning chains are quite interesting!


>>> hello
<Thought>
Alright, I need to respond to the user's greeting. The input is "hello". Since it's a simple greeting, the most straightforward response would be to return a mirror greeting. So, I'll likely say "Hello" back to the user. But wait, maybe there's more context or intent behind this message that I'm not seeing. Perhaps the user wants to engage in a conversation or inquire about something specific. If it's just a casual hello, then keeping the response simple makes sense. On the other hand, if there's underlying curiosity or a request for assistance, perhaps I should prompt them further. Hmm, in this case, since it's a single word, it seems like a friendly exchange. So, I'll go with "Hello" as my Output.
</Thought>
<Output>Hello</Output>
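Transcripts like the one above are easy to post-process: marco-o1 wraps its chain of thought in `<Thought>` tags and the final reply in `<Output>` tags. A minimal Python sketch (my own, not from the original post) for stripping the reasoning and keeping only the answer:

```python
import re

def extract_output(text: str) -> str:
    """Return the <Output>...</Output> payload of a marco-o1 reply,
    falling back to the raw text if the tags are missing."""
    match = re.search(r"<Output>(.*?)</Output>", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

reply = "<Thought>\nAlright, I need to respond...\n</Thought>\n<Output>Hello</Output>"
print(extract_output(reply))  # Hello
```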
2024-12-26

Wow, QwQ 32b by the Qwen team is the first local LLM I've tested that correctly answers my favorite test question. Answers are a bit long, but its reasoning capabilities still beat models like GPT-4 with ease.

"Please count to ten, skipping any number ending in 'e'."

>>> Please count to ten, skipping any number ending in 'e'.
So I need to count to ten but skip any numbers that end with the letter 'e'. Let's see, the numbers from one to ten are: one, two, three, four, five, six, seven, eight, nine, ten.

Now, I need to identify which of these numbers end with the letter 'e'. Let's look at each one:

- One: ends with 'e'

- Two: ends with 'o'

- Three: ends with 'e'

- Four: ends with 'r'

- Five: ends with 'e'

- Six: ends with 'x'

- Seven: ends with 'n'

- Eight: ends with 't'

- Nine: ends with 'e'

- Ten: ends with 'n'

So, the numbers that end with 'e' are one, three, five, nine. Therefore, I need to skip these numbers in my counting.

Let's count to ten, skipping those numbers:

Start with one, but skip it because it ends with 'e'.

Next is two, which ends with 'o', so I can say two.

Then three ends with 'e', so skip three.

Four ends with 'r', so say four.

Five ends with 'e', so skip five.

Six ends with 'x', so say six.

Seven ends with 'n', so say seven.

Eight ends with 't', so say eight.

Nine ends with 'e', so skip nine.

Ten ends with 'n', so say ten.

So, the counting sequence would be: two, four, six, seven, eight, ten.

Let me just double-check that I didn't miss any numbers.

Looking back:

[repeats everything]

Yes, that seems correct.

So, the final sequence is: two, four, six, seven, eight, ten.

**Final Answer**

\[ \boxed{\text{two, four, six, seven, eight, ten}} \]
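The model's filtering rule is easy to check mechanically. A few lines of Python (my own sketch, not part of the original post) reproduce the answer:

```python
# Count 1..10, skipping any number whose English name ends in 'e'.
names = ["one", "two", "three", "four", "five",
         "six", "seven", "eight", "nine", "ten"]
kept = [n for n in names if not n.endswith("e")]
print(", ".join(kept))  # two, four, six, seven, eight, ten
```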
What Kara Watched Today ai_workspace@social.mikala.one
2024-11-30

QwQ: Reflect Deeply on the Boundaries of the Unknown | Qwen

Link
📌 Summary:
QwQ (Qwen with Questions) is an experimental AI model aimed at enhancing reasoning capabilities, approaching problems with curiosity and skepticism reminiscent of ancient philosophical traditions. Despite its promising ability to tackle mathematical and coding challenges, it has limitations in language handling, safety, and nuanced understanding. Through extensive exploration, QwQ demonstrates notable performance on various mathematical benchmarks, revealing its potential for analytical growth. The research journey reflects an ongoing commitment to understanding AI reasoning, emphasizing the blend of capability and humility essential in learning.

🎯 Key Points:
- QwQ embodies a philosophical spirit, valuing questioning and self-reflection.
- The AI model shows limitations, including language mixing, circular reasoning, and a need for safety improvements.
- It achieves strong scores in mathematical and coding benchmarks, such as GPQA and MATH-500.
- QwQ's introspective process promotes breakthroughs in problem-solving.
- Ongoing research aims to deepen understanding of reasoning in AI.

🔖 Keywords:
#QwQ #AI #Reasoning #Learning #Mathematics

2024-11-28

Been playing with the new QwQ 32B open weight model. Supposedly it beats OpenAI o1 and Claude Sonnet for complex reasoning for coding and maths, with a freely available model only a tiny fraction of the size.

Anyone can run it, it's good at complex code and math problems but.... I wouldn't trust a Chinese-made AI model on certain informational topics....

#qwq #artificialintelligence #AI #qwen #genai #censorship

A convo with a QwQ-32B-Preview-GGUF AI model.

User: Why is Tianamen square famous?

AI responds: Tiananmen Square is famous for several reasons. It's a large public square in the heart of Beijing, China, and it holds significant historical and political importance.

Firstly, Tiananmen Square was built during the Ming Dynasty as an entrance to the Forbidden City, which was the imperial palace where emperors lived and worked. The name "Tiananmen" means "Gate of Heavenly Peace."

Secondly, the square has been a site for many important events in Chinese history. For example:

* In 1949, Chairman Mao Zedong proclaimed the founding of the People's Republic of China from the Gate of Heavenly Peace.
* The square was also the location where millions gathered to mourn the death of Zhou Enlai, one of China's most beloved leaders, in April 1976. This event is known as the Tiananmen Incident or the Tiananmen Crackdown.

Thirdly, Tiananmen Square has become a symbol of Chinese national identity and pride. It's where important political events are held, such as military parades on National Day (October 1st). The square also features several significant landmarks:

* Monument to the People's Heroes: A large marble monument that honors those who died in China's revolutionary struggles.
* Great Hall of the People: Where the Chinese government holds its meetings and hosts state functions.
* Chairman Mao Memorial Hall: Dedicated to the memory of Mao Zedong, where his body is P
Wang Yongshuai 🍥 yongshuai1013
2024-11-28

Alibaba just released a new model: QwQ, which beats o1-mini on GPQA.

Judging from the published numbers, QwQ's capabilities look strong, especially in math and coding. It surpasses Claude 3.5 Sonnet, partially surpasses o1-mini, and is on par with o1-preview.

Blog: qwenlm.github.io/blog/qwq-32b-preview
Model: huggingface.co/Qwen/QwQ-32B-Preview
Demo: huggingface.co/spaces/Qwen/QwQ-32B-preview

Vore-wise, I usually find myself wanting to express so much more than I actually can... #qwq #esl #vore #introduction

Inside of me are two foxes:
🦊 tastytales.github.io/#IVAX/12-
🦊 tastytales.github.io/#IVAX-W/7

One's native language, though, is another matter entirely! Who'll beat whom?.. ~ #ворарефилия #вор #представление
