Best AI News — Updated Every 3 Hours

Anyone using Tesla P40 for local LLMs (30B models)?

Via r/LocalLlama
Wednesday, Mar 25, 2026 · 1:40AM
Summary

Hey guys, is anyone here running a Tesla P40 with newer models like Qwen, Mixtral, or Llama? RTX 3090 prices are still very high, while a P40 goes for around $250, so I'm considering it as a budget option. Trying to understand real-world usability: how many tokens/sec are you getting on 30B models? Is it usable…
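For anyone who wants to answer the tokens/sec question with their own numbers, a minimal sketch of a throughput measurement is below. It is backend-agnostic: `generate` is a hypothetical stand-in for whatever inference call you use (llama.cpp, vLLM, etc.), not something from the post itself.

```python
import time

def tokens_per_sec(generate, prompt, n_tokens):
    """Time one generation run and return decode throughput.

    generate  -- callable taking (prompt, n_tokens); hypothetical stand-in
                 for your backend's generation call.
    n_tokens  -- number of tokens you asked the backend to produce.
    """
    start = time.perf_counter()
    generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed
```

In practice you would run this a few times and discard the first run, since the initial call usually includes model warm-up and prompt processing that skew the decode-speed number.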

Read the full post at r/LocalLlama: www.reddit.com