I find myself explaining this to people every time the conversation turns to search engines and their built-in LLMs. I wanted to make a single post instead of typing the same thing over and over.
Kagi, the search engine company, offers premium plans for its search and LLM features. Pricing starts at around $5/mo and goes up to $25/mo if you want all the bells and whistles, including the more advanced LLM stuff. Don't get me wrong, it is a good search engine, one of the best in fact. They claim to “stop the slop” by filtering out (as much as they can) the LLM-generated garbage that usually rises to the top of a typical Google search. The irony is that they also offer LLMs for you to use. Here are my thoughts on that irony:
The core issue with Kagi's position is not that they offer both “SlopStop” and LLM assistants. It is that they're simultaneously policing and proliferating the same technology, and that contradiction cannot be resolved through clever product segmentation.
“SlopStop” targets “low-quality, mass-generated LLM content designed to manipulate ranking.” But Kagi Assistant gives users access to 30+ LLMs from OpenAI, Anthropic, Google, Meta, and others, with customizable instructions and web access. The distinction between “personal use” and “public slop” is artificial: every content farm operator starts with a single LLM assistant generating text. Kagi cannot control what users do with the content they create. The moment LLM-generated text leaves their platform and gets published anywhere, it becomes part of the slop ecosystem they're trying to filter.
Kagi's own documentation states AI should “enhance, not create or replace” human intelligence. Yet their Assistant is explicitly designed to create: “write essays, code, emails, and more.” They've simply segregated the problem: they hide the slop in search results while selling the slop-production tools in a different tab. This isn't a principled stance; it's premium arbitrage. They down-rank external pollution while providing the infrastructure to generate it, then charge you for the privilege of filtering out what their own tools helped make inevitable.
Kagi acknowledges that content farms exploit LLMs “at massive scale.” But scale is scale, regardless of intent. A thousand users generating blog posts with Kagi Assistant creates the same problem as one content farm. The difference is that Kagi profits from the former through subscriptions while claiming to fight the latter through community reporting. They're not solving the slop problem; they're monopolizing it.
It really is not irony. It is a business model that benefits from both sides of the “AI” equation, wrapped in the language of user empowerment and community moderation. The best search experience in the world doesn't change the math: you can't credibly fight AI slop while democratizing the means of its production.
Stop letting these companies profit off the use of LLMs. Stop supporting companies that push them, make them easy to use, or even say they are fine with them. It is bad for you and me, for the rest of society, and for the environment. The companies that entwine themselves in the current “AI” bubble (see also Mozilla), instead of sticking with what they are good at and what makes them unique, are the very ones that will crash and go bankrupt when the bubble finally bursts.
:NoAI: :SystemChangeNotClimateChange:
Sources:
“Introducing SlopStop” - https://blog.kagi.com/slopstop
“SlopStop Kagi Help” - https://help.kagi.com/kagi/features/slopstop.html
“Kagi's AI Integration Philosophy” - https://help.kagi.com/kagi/why-kagi/ai-philosophy.html
#Kagi #search #searchengine #ai #llm #Google #Gemini #openai #chatgpt #slop #stoptheslop #SlopStop #environment #environmental #environmentalism #ecosocialism #socialism #AntiCapitalism