Best AI News — Updated Every 3 Hours

I haven't experienced Qwen3.5 (35B and 27B) overthinking. Posting my settings/prompt

Via r/LocalLlama
Sunday, Mar 22, 2026 · 8:01PM
Summary

I felt the need to make a post about these models, because I see a lot of talk about how they think for extended periods, get caught in thinking loops, or use an excessive number of reasoning tokens. I have never experienced this. In fact, I've noticed the opposite - I have been singularly impressed by h

Continue reading the full article at r/LocalLlama (www.reddit.com)
Do you want to build a robot snowman?
TechCrunch AI · Industry & Money
Lossy self-improvement
Interconnects · Models & Research
Crimson Desert dev apologizes for use of AI art
The Verge AI · Industry & Money