#BoltAI

2025-05-27

then you have little applets and utilities, swiss army knives for chat and tools that can use local-compute LLMs or cloud foundation models as well. #SetApp has one i use called #BoltAI and it's exactly what i said: a swiss army knife. image classification, search, a prompt library for all manner of tasks where you might need to quickly get a query and response from whatever models you use.

considering it's included in my #SetApp subscription, it was a slam dunk and i use it often.

#macOS #AI
2/2

this is BoltAI's chat view with history. you can group conversations by topic or project. in this screenshot i'm reviewing a conversation i had with a foundation model (o3? gemini 2.5?) about my project to let people create their own curriculum for ASAT if they're using psychedelics or ketamine for major depressive disorder. in multiple published studies this exercise extends the "halo" of neuroplasticity another 30 days, and i think it could help a lot of people in my field.

BoltAI has some commands for quick one-shot actions and operations, like "summarize this youtube video" (uses captions), "rephrase this", "what the fuck is this" with an image attached, or attach some PDFs and say "what does all of this even mean bruh", etc.

BoltAI has a prompt library and a commands/actions roster, but it also has Assistants, which chatbox calls CoPilots and ChatGPT calls "GPTs" (right? i only use their API). they're not like sillytavern character cards, but the end result is similar. this is the persona system prompt for when i think a situation calls for the 10th Doctor, which is pretty often. this is an abbreviated version, not the one that talked me down from a dreaded email in Gemini ;)

of course it's me using it, so it has to support local compute and inference. i run a few LLMs in the house full time and as needed, and in this screenshot i'm rolling down the list of models available to me on my ollama server. there's like 90 in there at the moment, many of them i haven't spent much time with yet.
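if you want to poke at that same model list programmatically, ollama exposes it over its local API. a minimal sketch in python, assuming the server is on ollama's default port (11434); swap the host if yours lives on another machine:

```python
import json
import urllib.request

# ollama's "list local models" endpoint; default port is 11434
OLLAMA_URL = "http://localhost:11434/api/tags"

with urllib.request.urlopen(OLLAMA_URL) as resp:
    models = json.load(resp)["models"]

# print the name and on-disk size of every model the server knows about
for m in sorted(models, key=lambda m: m["name"]):
    print(f'{m["name"]:40s} {m["size"] / 1e9:6.1f} GB')
```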

then i have another 30-40 in LM Studio, which exposes moe=&lt;count&gt; in a nice way that is retained, so all my mixture-of-experts models with experts > 2 are served by LM Studio, because i don't know how to change the number-of-experts parameter in ollama >< some of the models are named after specific projects or purposes cuz i built them, and i obfuscated the names.
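since LM Studio serves an OpenAI-compatible endpoint on its default port (1234), anything that can speak the OpenAI API can hit those MoE models too. a rough sketch; "my-moe-model" is a placeholder, not one of my actual obfuscated names, and the expert count is whatever was set and retained in LM Studio's load settings, not something you pass per request:

```python
import json
import urllib.request

# chat completion request against LM Studio's local OpenAI-compatible server
payload = {
    "model": "my-moe-model",  # placeholder; use a model name LM Studio has loaded
    "messages": [{"role": "user", "content": "what does all of this even mean bruh"}],
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```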
