Making the most out of a small LLM
Yesterday I finally built my own #AI #server. I had a spare #Nvidia RTX 2070 with 8GB of #VRAM lying around and had wanted to do this for a long time.
The problem is that most #LLMs need a lot of VRAM, and I don't want to buy another #GPU just to host my own AI. Then I came across #gemma3 and #qwen3. Both are amazing #quantized models with stunning reasoning given how few resources they need.
I chose huihui_ai/qwen3-abliterated:14b since it supports #deepthinking and #toolcalling and is pretty unrestricted. After some testing I noticed that the 8b variant actually performs better than the 14b one and runs drastically faster; honestly, I can't make out any quality loss. The 14b model also sneaked Chinese characters into its responses quite often, whereas the 8b model doesn't.
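For context, this is roughly how I talk to the model from Python. It's just a minimal sketch assuming the ollama Python client and that the 8b variant uses the same tag scheme as the 14b one; the prompt is only a placeholder:

```python
# Minimal sketch: query the local model via the ollama Python client.
# Assumes ollama is running locally and the 8b model has already been pulled.
import ollama

response = ollama.chat(
    model="huihui_ai/qwen3-abliterated:8b",
    messages=[{"role": "user", "content": "Erkläre kurz, was ein Homelab ist."}],
)
print(response["message"]["content"])
```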
Now I've got a very fast model with amazing reasoning (even in German) and tool calling support. The only thing left to improve is knowledge. #Firecrawl is a great tool for #webscraping, and as soon as I implemented web searching, the setup was complete. At least I thought it was.
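Here's a rough sketch of how the web search tool is wired up. It assumes the ollama client's tool calling; search_web is a hypothetical placeholder standing in for the actual Firecrawl scraping, and the exact response fields may differ between client versions:

```python
# Rough sketch of the tool-calling loop, assuming the ollama Python client.
import ollama

MODEL = "huihui_ai/qwen3-abliterated:8b"

def search_web(query: str) -> str:
    """Hypothetical helper: in the real setup this would scrape search results
    via a (self-hosted) Firecrawl instance and return them as markdown."""
    return f"(web results for: {query})"  # placeholder so the sketch runs without Firecrawl

tools = [{
    "type": "function",
    "function": {
        "name": "search_web",
        "description": "Search the web and return page content as markdown.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

messages = [{"role": "user", "content": "What happened in tech news today?"}]
response = ollama.chat(model=MODEL, messages=messages, tools=tools)

# If the model decided to call the tool, run it and feed the result back in.
for call in response["message"].get("tool_calls") or []:
    if call["function"]["name"] == "search_web":
        messages.append(response["message"])
        messages.append({"role": "tool", "content": search_web(**call["function"]["arguments"])})

final = ollama.chat(model=MODEL, messages=messages)
print(final["message"]["content"])
```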
I want to make the most out of this LLM, so my next step is to implement a basic #webserver that exposes the same #API #endpoints as #ollama. That way, everywhere Ollama is supported, I can point it to my Python script instead. This makes the model feel way more capable than it actually is, and I can use these advanced features everywhere without being bound to its actual knowledge.
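A minimal sketch of what I have in mind, assuming FastAPI, uvicorn and the ollama Python client; the endpoint and payload shapes mimic Ollama's non-streaming /api/chat, and the filename and port are just examples:

```python
# Minimal sketch of an Ollama-compatible proxy: it exposes /api/chat like Ollama
# does and forwards requests to the real model, which is where the extra
# tool/search logic can be spliced in. Streaming is left out for brevity.
import ollama
from fastapi import FastAPI, Request

app = FastAPI()

@app.post("/api/chat")
async def chat(request: Request):
    payload = await request.json()
    # This is the spot where web search results or tool calls could be injected
    # into the message history before the model sees it.
    response = ollama.chat(
        model=payload.get("model", "huihui_ai/qwen3-abliterated:8b"),
        messages=payload.get("messages", []),
    )
    # Return a reply shaped like Ollama's non-streaming /api/chat response.
    return {"model": payload.get("model"), "message": response["message"], "done": True}

# Run with: uvicorn proxy:app --port 11435  (hypothetical filename and port)
```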
To improve this setup even further, I will likely switch to a #mixture_of_experts architecture soon. This project is a lot of fun, and I can't wait to integrate it into my homelab.
#homelab #selfhosting #privacy #ai #llm #largelanguagemodels #coding #development