#LocalLlama

2025-07-07

New, promising MoE model "Hunyuan" by Tencent

lemmy.ml/post/32831762

Are you using any MCP servers? If so, what?

lemmy.ca/post/46905423

2025-06-27
2025-06-25

Do you quantize models yourself?

lemmy.ml/post/32231409

Any recommendations for a text editor with good AI integration (not a code editor)?

lemmy.world/post/31850136

Local Voiceover/Audiobook generation

infosec.pub/post/29845011

2025-06-11

Mistral releases Magistral, official Mistral Small reasoning model

lemmy.world/post/31175078

Current best local models for tool use?

lemm.ee/post/66465951

I'm excited for dots.llm (142BA14B)!

sh.itjust.works/post/39817345

2025-06-08

Updated guidelines for c/LocalLlama (new rules)

lemmy.world/post/31038465

AMD’s Untether AI Deal - Bad Signs for GPU-Driven AI Training

aussie.zone/post/21339438

My AI Skeptic Friends Are All Nuts

aussie.zone/post/21200045

Despite record gaming revenues, Nvidia reportedly plans to cut RTX 50 series production to allocate it to new AI hardware

lemmy.ml/post/30982488

Noob experience using local LLM as a D&D style DM.

sh.itjust.works/post/39139782

Don't overlook llama.cpp's rpc-server feature.

sh.itjust.works/post/39137051

LLMs and their efficiency: can they really replace humans?

lemm.ee/post/65523085

2025-05-29

DeepSeek just released R1-distilled Qwen3 models, boasting improved performance over the original models

lemmy.world/post/30442991

2025-05-28

A gallery that showcases on-device ML/GenAI use cases and lets you try models locally on your phone. APK

feddit.it/post/18063918
