In case anyone wants to run their own #LLM on their personal devices, I can suggest #lmstudio. It lets you download any of the major open-weight models to your own laptop/desktop, then interact with them through its graphical interface, which will feel familiar if you've used #Claude or #ChatGPT or #Gemini. It will also tell you whether your device has enough resources (memory and compute) to run a specific model variant: https://lmstudio.ai/
For those of you who love the Terminal (aka the Command Line Interface, or #CLI), check out https://ollama.com/
For starters, I suggest the #gemma3n model (works great on tablets, laptops, or phones), or #llama3.1 for the most common interactions. While many of us geeks have very powerful personal systems or servers in our home racks, most people don't have high-end hardware. The gemma3n model is lightweight, surprisingly capable for its size, and a solid general-purpose LLM.
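If you go the ollama route, the basic workflow is just a couple of commands. A sketch, assuming the ollama binary is installed and on your PATH (guarded so it's a no-op on machines without it):

```shell
# Typical ollama workflow: pull a model once, then chat with it.
# Guarded so this sketch exits cleanly on machines without ollama installed.
if command -v ollama >/dev/null 2>&1; then
    ollama pull gemma3n              # download the model weights locally
    ollama run gemma3n "Say hello"   # one-shot prompt (omit the prompt for an interactive chat)
    ollama list                      # show which models you already have
    status="ran ollama"
else
    status="ollama not installed; nothing to do"
fi
echo "$status"
```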
What's the benefit of running LLMs locally? #Privacy is a big one (it's running on your local machine, not a cloud server), so you can ask it questions about sensitive business data, PHI/PII, etc. You can also run it #offline (no Internet connection required), so if you want to #airgap your interactions, or play with it on vacation (on a plane, lost in the back country, etc.), you can absolutely do that - even with the DeepSeek model.
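Ollama also exposes a small HTTP API on localhost (port 11434 by default), so you can script against a local model and nothing ever leaves your machine. A minimal sketch using its documented /api/generate endpoint, guarded so it does nothing when no server is listening:

```shell
# Query a locally running Ollama server; the prompt and answer stay local.
# Assumes `ollama serve` is running with llama3.1 already pulled; otherwise we skip.
if command -v curl >/dev/null 2>&1 \
   && curl -sf --max-time 2 http://localhost:11434/ >/dev/null 2>&1; then
    answer="$(curl -s http://localhost:11434/api/generate \
        -d '{"model": "llama3.1", "prompt": "What is the capital of Italy?", "stream": false}')"
else
    answer="no local Ollama server reachable; skipped"
fi
echo "$answer"
```

With "stream": false the server returns one JSON object whose "response" field holds the model's full answer.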
I started playing with both LM Studio and ollama myself - asking models basic questions like "what is the capital of Italy" as well as more complex ones like "write me a #powershell script to add users of a specific OU to a number of Security Groups within AD" - and so far, the answers have been very accurate. The PowerShell script llama3.1 provided worked out of the box (after I revised the variables to match my environment).