I've been working more on my local homelab LLM project. It runs gpt-oss:20b as its base model alongside a RAG system (BM25 + embeddings), accessed through a custom web UI. Nothing groundbreaking there. What excites the geek in me is how the system uses what I call policy-driven RAG. Spoiler: no LLM is truly rule-based. What my system does is use a file called other_topics.txt to determine whether model output comes solely from RAG data (the default), from general gpt-oss:20b model knowledge (aka other topics), or from a hybrid of the two. Does it scale to larger deployments? I have no idea. But it's very homelab-geek cool. #opensource #localllm #pangeahillsai #openai #gpt-oss #homelab
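For the curious, here's a minimal sketch of what that policy file could drive. The post doesn't spell out the routing logic, so everything here is an assumption except the other_topics.txt filename and the three modes: topics listed in the file may fall back to general model knowledge, everything else stays RAG-only, and the hybrid rule is purely hypothetical.

```python
def load_other_topics(path="other_topics.txt"):
    """Load the topic allow-list: one topic per line, '#' starts a comment."""
    topics = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip().lower()
            if line:
                topics.append(line)
    return topics


def route(query, other_topics):
    """Pick an answer mode: 'rag' (default), 'general', or 'hybrid'.

    Assumed behavior, not the real project's logic: a query mentioning a
    listed topic escapes RAG-only mode; if it also uses local-context
    words (a made-up heuristic), it gets the hybrid mode instead.
    """
    q = query.lower()
    hits = [t for t in other_topics if t in q]
    if not hits:
        return "rag"  # default: answer only from RAG data
    if set(q.split()) & {"my", "our", "local"}:
        return "hybrid"  # listed topic plus local-context vocabulary
    return "general"  # listed topic, pure base-model knowledge
```

The router's verdict would then decide whether the web UI sends retrieved chunks, no chunks, or both chunks and a "you may use general knowledge" instruction to gpt-oss:20b.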





