#LLM_agent

2025-06-10

I've seen the light of MCP. Well, not the protocol itself. My understanding is that it's pretty janky, and I don't need to be an expert to see the context-injection threat it represents.

But I have Claude Desktop rigged with local memory, filesystem, shell tools, and a behavioral correction rule system, and it's pretty slick! Next I want to try it with Ollama, though I doubt any model I can run locally will handle the context overhead. A rough sketch of the wiring is below.
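
For the curious: the rig is mostly just entries in Claude Desktop's claude_desktop_config.json. This is a minimal sketch using the reference memory and filesystem servers only; the shell tooling and the rule system are separate servers not shown here, and the filesystem path is a placeholder.

```json
{
  "mcpServers": {
    "memory": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-memory"]
    },
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/notes"]
    }
  }
}
```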

#AI #mcp #llm_agent #local_llm
