#LocalLLM

Ronnie Rocha (ronnierocha)
2025-07-04

I got Xcode's native coding assistant working with my local AI models using Ollama. Apple made it surprisingly easy! 👏 I wrote down the simple steps if you want to try it too. 👨‍💻

ronnierocha.dev/blog/connectin

Nicolas MOUART (silentexception)
2025-07-04

Errata: the NVIDIA RTX A2000 6GB can reach up to 200 tok/s on a 1.5B model! Not bad at <70 W for the GPU, and less than 140 W total for the build (old, deprecated hardware), given that this kind of "useless" benchmark is promoted everywhere by "pro"/paid tech enthusiasts.
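Tok/s figures like the one above come down to a simple ratio: tokens generated divided by wall-clock time. A minimal sketch of how such a number might be measured, assuming any iterable stream of tokens (the token counts in the example are illustrative, not a real benchmark run):

```python
import time

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput in tokens/second -- the metric quoted in GPU benchmarks."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_s

def measure_stream(stream) -> float:
    """Time an iterable of generated tokens and return its tok/s rate."""
    start = time.perf_counter()
    count = sum(1 for _ in stream)  # consume the stream, counting tokens
    return tokens_per_second(count, time.perf_counter() - start)

# Example: 1000 tokens in 5 seconds is the 200 tok/s quoted above.
print(f"{tokens_per_second(1000, 5.0):.0f} tok/s")
```

Wrapping a real backend's streaming generator in `measure_stream` would give a comparable number, though prompt length and quantization make cross-setup comparisons rough at best.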

benchmarks
2025-06-16

Since AI is barging into our lives whether we like it or not, I'm starting a personal project: setting up an "AI" that runs on my own machine and uses as few resources as possible (relative to mainstream AIs). I already have some interesting results for managing a todo list and other simple tasks.

I'm sure other projects like this exist, but they're hard to find. If you have links, I'm all ears 🙂
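One lightweight pattern for a todo-list "agent" like the one described above: ask the local model to answer only in a tiny command grammar, then parse those commands deterministically. Everything here is a sketch — the `ADD`/`DONE`/`LIST` grammar is invented for illustration, and the model call itself is left out:

```python
import re

# Hypothetical command grammar we would instruct the local model to emit:
#   ADD <task> | DONE <task> | LIST
COMMAND_RE = re.compile(r"^(ADD|DONE|LIST)(?:\s+(.*))?$")

def apply_command(todos: list[str], line: str) -> list[str]:
    """Apply one model-emitted command to the todo list; ignore anything else."""
    m = COMMAND_RE.match(line.strip())
    if not m:
        return todos  # non-command chatter from the model is dropped
    op, arg = m.group(1), (m.group(2) or "").strip()
    if op == "ADD" and arg:
        return todos + [arg]
    if op == "DONE":
        return [t for t in todos if t != arg]
    return todos  # LIST (or malformed ADD) leaves state unchanged

todos: list[str] = []
for line in ["ADD buy milk", "ADD call garage", "DONE buy milk"]:
    todos = apply_command(todos, line)
print(todos)  # ['call garage']
```

Keeping the state transitions outside the model means a small 1.5B–7B model only has to translate natural language into three verbs, which is exactly the kind of task where low-resource local setups do well.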

#LocalLLM #LocalAI #SelfHosting #Privacy

➴➴➴Æ🜔Ɲ.Ƈꭚ⍴𝔥єɼ👩🏻‍💻 (AeonCypher@lgbtqia.space)
2025-05-21

So, with #copilot going #OpenSource, I can't wait for a version that runs easily off #ollama or another #LocalLLM.

2025-05-21

@system76
I love #LLM, or as they're often called, #AI, especially when used locally. Local models are incredibly effective for enhancing daily tasks like proofreading, checking emails for spelling and grammatical errors, quickly creating image descriptions, transcribing audio to text, or even finding that one quote buried in tons of files that answers a recurring question.
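The "find that one quote buried in tons of files" task is a good example of where a plain retrieval pass can shortlist candidates before (or instead of) involving a model at all. A minimal sketch, assuming the notes live in `.txt` files — the scoring is just word overlap, not embeddings:

```python
from pathlib import Path

def find_quote(root: str, query: str, max_hits: int = 5):
    """Rank lines in *.txt files under `root` by shared words with the query."""
    words = set(query.lower().split())
    hits = []
    for path in Path(root).rglob("*.txt"):
        text = path.read_text(errors="ignore")
        for lineno, line in enumerate(text.splitlines(), 1):
            overlap = len(words & set(line.lower().split()))
            if overlap:
                hits.append((overlap, str(path), lineno, line.strip()))
    hits.sort(key=lambda h: -h[0])  # most shared words first
    return hits[:max_hits]
```

A local LLM could then be handed only the top few lines with their file paths and asked which one actually answers the recurring question — keeping both the search and the model on-device.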

However, if I wanted to be fully transparent to #bigtech, I would use Windows and Android with all the "big brotherly goodness" baked into them. That's why I hope these tools don't connect to third-party servers.

So, my question to you is: do you offer a privacy-oriented, local-first/self-hosted LLM?

I'm not opposed to the general notion of using AI, and if done locally and open-source, I really think it could enhance the desktop experience. Even the terminal could use some AI integration, especially for spell-checking and syntax-checking those convoluted and long commands. I would love a self-hosted integration of some AI features. 🌟💻
#OpenSource #Privacy #AI #LocalModels #SelfHosted #LinuxAI #LocalLLM #LocalAI

N-gated Hacker News (ngate)
2025-05-06

💾🤖 Ah, Clippy, the Picasso of pointless 90s nostalgia! Because nothing screams "cutting-edge local LLM" like a paperclip with an inferiority complex. 🎨🖼️ Let's all applaud the avant-garde genius of recreating software that everybody already hated! 🙄👏
felixrieseberg.github.io/clipp

Fabian (Bocchi) 🏳️‍🌈 (fabiscafe@mstdn.social)
2025-05-01

If I could use local LLM to generate hashtags, I surely would use it.
#LLM #LocalLLM #AI #MachineLearning #NaturalLanguageProcessing

Dr. Fortyseven 🥃 █▓▒░ (fortyseven@defcon.social)
2025-04-27

So, I had an idea while messing with #LocalLLM geolocation with #Gemma3.

I fed it this photo from Wikipedia of Governors Island in NY.

Unsurprisingly, it figured it out immediately.

> Considering all the evidence, I am confident that the image depicts Governors Island in New York City. The shape of the island, the surrounding skyscrapers, the port facilities, and the presence of the Verrazano-Narrows Bridge all point to this location.

Fair enough. That's baby stuff at this point. ;)

But then I had an idea: "Assuming this refined guess is correct, where might the photo have been taken FROM?"

The ol' geolocate the camera trick. (Usually finding a CCTV or something, in some cases.)

> Based on the clarity of the view of Governors Island, the partial obstruction of other buildings, and the overall angle, I believe the photo was most likely taken from One World Trade Center. The height and location provide the perfect vantage point for this shot.
>
> Specifically, I'd estimate the photo was taken from the *observation deck or a high-floor office within One World Trade Center, likely facing southeast towards Governors Island and the Verrazano-Narrows Bridge*.

And uh... well, yeah probably. :D

Though I don't expect nearly as much success on other photos. I'll keep trying and see what I get.

[cc: @simon]

#llm #osint
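The two-step workflow above (identify the location, then ask where the camera was) maps naturally onto two chat turns with the second turn keeping the first answer in context. A sketch assuming the `ollama` Python client and a locally pulled `gemma3` model; the image path is hypothetical, and the message-building is separated out so it can be inspected without a running server:

```python
# Sketch of a two-step geolocation prompt for a local vision model.
# Assumes the `ollama` Python client; `governors_island.jpg` is a stand-in path.

def build_messages(image_path: str):
    """Build the initial image question and the camera-position follow-up."""
    first = {
        "role": "user",
        "content": "Where was this photo taken? Explain your reasoning.",
        "images": [image_path],  # Ollama attaches local image files this way
    }
    follow_up = {
        "role": "user",
        "content": ("Assuming this refined guess is correct, "
                    "where might the photo have been taken FROM?"),
    }
    return first, follow_up

if __name__ == "__main__":
    import ollama  # guarded import: only needed when actually querying
    first, follow_up = build_messages("governors_island.jpg")
    reply = ollama.chat(model="gemma3", messages=[first])
    history = [first, reply["message"], follow_up]  # keep turn 1 in context
    print(ollama.chat(model="gemma3", messages=history)["message"]["content"])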

Arda Kılıçdağı (arda@micro.arda.pw)
2025-04-25

Then I installed Ollama and pulled Qwen from there — agent mode worked right away.

But when I tried Deepseek again through Ollama, it still said agent mode was unsupported.

I couldn’t figure out the difference — kinda confused.
What are you using locally, and how’s it working for you?

#LLM #LocalLLM #Qwen #DeepSeek #Ollama #AI #AgentMode #MachineLearning #OpenSourceAI
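"Agent mode" working for one model and not another usually comes down to tool calling: the model's chat template either supports a `tools` field or it doesn't, regardless of how capable the weights are. A hedged sketch of probing this through the `ollama` Python client — the weather tool is a made-up example schema, and the assumption that unsupported models raise an error reflects common Ollama behavior rather than a guarantee:

```python
# Minimal tool definition in the JSON-schema shape the Ollama chat API accepts.

def weather_tool() -> dict:
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Look up current weather for a city",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }

def supports_tools(model: str) -> bool:
    """Probe a local model: templates without tool support typically error out."""
    import ollama  # guarded import: only needed for the live probe
    try:
        ollama.chat(model=model,
                    messages=[{"role": "user", "content": "ping"}],
                    tools=[weather_tool()])
        return True
    except Exception:
        return False
```

Running `supports_tools("qwen2.5")` versus `supports_tools("deepseek-r1")` (model tags are examples) would make the Qwen-works/DeepSeek-doesn't split above visible directly, without involving the editor integration at all.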

N-gated Hacker News (ngate)
2025-04-21

🚀 Local LLM inference: a glorious code spaghetti that no sane developer wants to untangle. 🎉 It's like building a rocket 🚀 only to realize you forgot the launchpad—works great in theory but crashes spectacularly in the real world. 🌎🔧
medium.com/@aazo11/local-llm-i

Donatas (samogitian)
2025-04-08

I did some tinkering yesterday... I couldn't be bothered investigating why my IP kept getting banned after fetching emails from my email server, so I decided to delegate the problem to a local LLM.

I set up a knowledge base with all the necessary config and log files, etc., and asked it to investigate... Since it's a local model, I had no issues submitting even the most sensitive information to it.

QwQ did come up with tailored suggestions on how to fix the problem.

the_tech_whisperer (noahjstreet)
2025-04-05

I ditched ChatGPT Plus and built my own AI assistant on a forgotten Windows server.
100% offline. No API. No rate limits.
Flask + React + Ollama + Mistral.
It powers our IT helpdesk, learns from real tickets, and costs me nothing.

Full write-up drops Tuesday.
Want the stack early? Hit me up.

2025-03-23

It's going way too fast... hard to keep up

#ai #LocalLLM #tech #technology #meme

2025-03-23

Just learned about the RTX PRO 6000 - a GPU so powerful, it probably needs its own power station! 💥

With 96GB of GDDR7 memory, it's like having a supercomputer in your desktop!

Perfect for those who want to make AI and graphics weep with joy!
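Back-of-envelope math puts the 96 GB in context: VRAM to load a model is roughly parameter count times bytes per weight, plus headroom for the KV cache and buffers. A sketch where the ~20% overhead factor is a loose rule of thumb, not a measured figure:

```python
def vram_gb(params_billion: float, bits_per_weight: int,
            overhead: float = 1.2) -> float:
    """Rough VRAM (GB) to run a model: weights at the given quantization,
    plus ~20% headroom for KV cache and buffers (a loose rule of thumb)."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1e9

# A 70B model at 8-bit quantization: ~84 GB, which fits in the card's 96 GB --
# something no consumer GPU can do in a single slot today.
print(f"{vram_gb(70, 8):.0f} GB")
```

The same arithmetic shows why the RTX A2000 6GB from the benchmark post upthread tops out around 1.5B–3B models at 4-bit.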

Bitcoin: BC1QY6PRA087YVF3FNG5F8VN466HV9VM6Y7CSL2M8S - Saving up for a rig that'll make Skynet look like a pocket calculator..

#ai #LocalLLM #localllama #nvidia #bitcoin

2025-03-17

How to use a #LocalLLM to assist with the process of writing a story:

1. Have an idea
2. Type a description of that idea, complete enough to be a half-finished story, into an LLM
3. Watch it execute the remaining half of that idea so badly (and mutilate some of the half you gave it) that you are inspired to do it better out of sheer spite
4. Throw away what the LLM wrote and write the passage the old-fashioned way
5. Publish the "LLM-assisted" work

I am not being ironic, btw. Why does this actually work for me?
