Mom, can I have an #RTX5090 to do #cuda and have my own #privacymatters #LLM_agent ?
We have #Nvidia at home.
I've seen the light of MCP. Well, not the protocol itself. My understanding is that it's pretty janky, and I don't need to be an expert to see the context-injection threat it represents.
But I have Claude Desktop rigged with local memory, filesystem, and shell tools, plus a behavioral-correction rule system, and it's pretty slick! Next I want to try it with Ollama, although I doubt any model I can run locally will handle the context overhead.
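For anyone wanting to rig up something similar: Claude Desktop reads MCP servers from its `claude_desktop_config.json`. A minimal sketch wiring in the official filesystem server (the allowed path is a placeholder; swap in your own, and add memory/shell servers the same way):

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": [
        "-y",
        "@modelcontextprotocol/server-filesystem",
        "/Users/you/Documents"
      ]
    }
  }
}
```

Restart Claude Desktop after editing the file for the server to show up.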
How to vibe code for free: Running Qwen3 on your Mac, using MLX
https://localforge.dev/blog/running-qwen3-macbook-mlx
#ycombinator #Qwen3 #MLX #macOS #Apple_Silicon #Local_LLM #Localforge #Free_Code_Generation #Ollama #Local_AI #LLM_Agent
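Beyond the linked walkthrough, the terminal version is short. A sketch, assuming Apple Silicon and `pip install mlx-lm` — the exact model repo name is an assumption; pick whichever Qwen3 quant from `mlx-community` fits your RAM:

```shell
# Install the MLX LM toolkit (Apple Silicon only)
pip install mlx-lm

# Generate text; downloads the model from Hugging Face on first run.
# Model repo name is an assumption -- substitute your preferred Qwen3 quant.
mlx_lm.generate \
  --model mlx-community/Qwen3-8B-4bit \
  --prompt "Write a Python function that reverses a string." \
  --max-tokens 256
```

First run is slow (multi-GB download); after that it's all local.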