What is tech doing to our minds?
#Tech #Technology #Computers #Internet #Gadgets #Ideas #New #Cool #Futurism #Future #ArtificialIntelligence #AI #ML #MachineLearning #Learning #ChatGPT #OpenAI #Llama #Ollama #LMStudio #HuggingFace
Accessing LM Studio Server from WSL Linux
(Not complicated, just tricky to find the settings)
https://ingo.kaulbach.de/accessing-lm-studio-server-from-wsl-linux/
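The linked post walks through the actual settings; as a rough sketch of what the working setup looks like, assuming LM Studio's default port 1234 and that its server is set to serve on the local network (so it binds beyond 127.0.0.1), a WSL2 guest can usually reach the Windows host via the nameserver listed in /etc/resolv.conf:

```python
# Minimal sketch: query an LM Studio server on the Windows host from inside WSL2.
# Assumptions: LM Studio's server is enabled with "Serve on Local Network" and
# listens on the default port 1234; WSL2 is in its default NAT networking mode.
import json
import re
import urllib.request

def windows_host_ip() -> str:
    """In default WSL2 setups the Windows host is the nameserver in /etc/resolv.conf."""
    with open("/etc/resolv.conf") as f:
        match = re.search(r"nameserver\s+(\S+)", f.read())
    if not match:
        raise RuntimeError("Could not find the Windows host IP in /etc/resolv.conf")
    return match.group(1)

host = windows_host_ip()
with urllib.request.urlopen(f"http://{host}:1234/v1/models", timeout=5) as resp:
    # Lists the models LM Studio currently exposes over its OpenAI-style API.
    print(json.dumps(json.load(resp), indent=2))
```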
There’s Proof! AI Makes You Stupid. https://indubitablyodin.medium.com/theres-proof-ai-makes-you-stupid-55b08486efee
#ArtificialIntelligence #AI #ML #MachineLearning #Learning #ChatGPT #OpenAI #Llama #Ollama #LMStudio #HuggingFace #Tech #Technology #Computers #Internet #Gadgets #Ideas #New #Cool #Futurism #Future
Your #1 Ethical AI List https://medium.datadriveninvestor.com/your-1-ethical-ai-list-7d306816e6f2
#ArtificialIntelligence #AI #ML #MachineLearning #Learning #ChatGPT #OpenAI #Llama #Ollama #LMStudio #HuggingFace #Tech #Technology #Computers #Internet #Gadgets #Ideas #New #Cool #Futurism #Future #Politics #Political #Ideas #Philosophy #World
Skepticism Made Me an AI Expert https://indubitablyodin.medium.com/skepticism-made-me-an-ai-expert-3339f62219ba
#ArtificialIntelligence #AI #ML #MachineLearning #Learning #ChatGPT #OpenAI #Llama #Ollama #LMStudio #HuggingFace #Tech #Technology #Computers #Internet #Gadgets #Ideas #New #Cool #Futurism #Future
What does your local #LLM setup for software #development look like? I am on an M4 Pro MacBook with 24GB RAM. I can only use local LLMs with #Ollama or #LMStudio, paired with either #JetBrains #AI (but they do not support local LLMs for code completion) or the Continue plugin.
At the moment I use qwen2.5-coder:7b with 4bit quantization for autocompletion, Phi4 or Llama3.1 8B for chatting and nomic-embed-text for embedding. Suggestions? :mastodon:
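For what it's worth, both Ollama and LM Studio expose local HTTP APIs, so the same models can also be scripted outside the IDE. A minimal sketch against Ollama's default endpoint on port 11434, using the model names from the post (the prompt text is just an example):

```python
# Minimal sketch: chat and embeddings against a local Ollama server (default port 11434).
import json
import urllib.request

OLLAMA = "http://localhost:11434"

def post(path: str, payload: dict) -> dict:
    req = urllib.request.Request(
        f"{OLLAMA}{path}",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Chat with stream disabled so the reply comes back as a single JSON object.
reply = post("/api/chat", {
    "model": "phi4",
    "messages": [{"role": "user", "content": "Explain tail recursion in one sentence."}],
    "stream": False,
})
print(reply["message"]["content"])

# Embed a snippet with nomic-embed-text, e.g. for a local vector store.
emb = post("/api/embeddings", {"model": "nomic-embed-text", "prompt": "fn main() {}"})
print(len(emb["embedding"]), "dimensions")
```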
How to run a local LLM (AI) in Android Studio
Hi! If you're a mobile developer who follows AI trends, you've probably wondered how to integrate language models (LLMs) into your apps right from Android Studio. In this article I'll show you how to do it quickly and simply, without relying on external APIs or cloud solutions.
@sam4000 assistant: Ah, Hamburg! It's a little melancholy, but also somehow cozy. Partly cloudy, so no sunshine, but still pleasantly warm at 57 degrees. The wind is coming from the northeast and whistling gently around the corner, perfect for a walk with a cup of tea!
The download speed is pretty good 😎
Hongkiat: Running Large Language Models (LLMs) Locally with LM Studio. “Running large language models (LLMs) locally with tools like LM Studio or Ollama has many advantages, including privacy, lower costs, and offline availability. However, these models can be resource-intensive and require proper optimization to run efficiently. In this article, we will walk you through optimizing your […]”
I can't figure it out. I've spent all evening trying to get #SillyTavern to detect #Mythomax in #LMStudio, but it won't.
So I'm using Mythomax in LM Studio, and it's shit. It's badly broken and crappy: it gets the narrative tense wrong, even within the same sentence, and it devolves into mindless repetition with no room for input. A nightmare.
#ChatGPT was light-years beyond this, but now they ban "AI relationships," censoring creative writing for consenting adults. #LLM
Cohere Command: the revolution we missed
🔪 A carousel of trial tokens as a jab in the ribs for a cloud LLM provider. The cloud LLM provider Cohere offers 20 requests per minute for free, with no credit card check. I simply couldn't resist the idea of building a fun carousel :)
https://habr.com/ru/articles/893232/
#typescript #javascript #python #lmstudio #ollama #llm #openai #cohere #искусственный_интеллект #машинное_обучение
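The full write-up is in the linked article; purely as an illustration of the "carousel" idea, here is a minimal Python sketch that round-robins several trial keys so no single key exceeds its per-minute quota. The key names and the `call_llm` placeholder are hypothetical, not Cohere's actual SDK.

```python
import itertools
import time

# Hypothetical trial keys; in the article each corresponds to a free trial account.
TRIAL_KEYS = ["trial-key-1", "trial-key-2", "trial-key-3"]
REQUESTS_PER_MIN_PER_KEY = 20  # the free limit cited in the post

key_cycle = itertools.cycle(TRIAL_KEYS)
recent_calls: dict[str, list[float]] = {k: [] for k in TRIAL_KEYS}

def call_llm(api_key: str, prompt: str) -> str:
    """Placeholder for the actual provider call (e.g. a chat completion request)."""
    raise NotImplementedError

def carousel_request(prompt: str) -> str:
    """Pick the next key whose per-minute budget is not exhausted, then call it."""
    for _ in range(len(TRIAL_KEYS)):
        key = next(key_cycle)
        now = time.time()
        # Keep only timestamps from the last 60 seconds, then check the budget.
        recent_calls[key] = [t for t in recent_calls[key] if now - t < 60]
        if len(recent_calls[key]) < REQUESTS_PER_MIN_PER_KEY:
            recent_calls[key].append(now)
            return call_llm(key, prompt)
    raise RuntimeError("All trial keys are rate-limited right now; try again later.")
```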
Not sure if you have noticed it: Google has released Gemma 3, a powerful model that is small enough to run on normal computers.
https://blog.google/technology/developers/gemma-3/
I've done some experiments on my laptop (with a GeForce 3080 Ti), and am very impressed. I tried to be happy with Llama 3, with the DeepSeek R1 distills of Llama, and with Mistral, but the models that would run on my computer were not in the same league as what you get remotely from ChatGPT, Claude, or DeepSeek.
Gemma changes this for me. So far I have had it write three smaller pieces of JavaScript and analyze a few texts, and it performed slowly, but flawlessly. So finally I can move to "use the local LLM for the 90% default case, and go for the big ones only if the local LLM fails".
This way
- I cause far less CO2 for my LLM tasks
- I am in control of my data; nobody can collect my prompts and later sell my profile to advertisers
- I am sure the IP in my prompts stays with me
- I have the privacy to ask it whatever I want, and no server in the US or CN gets that data.
Interested? If you have a powerful graphics card in your PC, it is totally simple:
1. Install LM Studio from LMStudio.ai
2. In LM Studio, click Discover and download the Gemma 3 27B Q4 model
3. Chat
If your graphics card is too small, you might head for the smaller 12B model, but I can't tell you how well it performs.
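If you later want to use the same model from scripts instead of the chat window, LM Studio can also run a local server that speaks an OpenAI-style API. A minimal sketch, assuming the default port 1234 and a model identifier along the lines of "gemma-3-27b" (copy the exact name from LM Studio's model list; this one is a guess):

```python
# Minimal sketch: ask a local Gemma 3 running behind LM Studio's server to write some code.
# Assumes the server is started in LM Studio and listens on the default http://localhost:1234.
import json
import urllib.request

payload = {
    "model": "gemma-3-27b",  # hypothetical identifier; check LM Studio for the real one
    "messages": [
        {"role": "user", "content": "Write a JavaScript function that debounces another function."}
    ],
    "temperature": 0.2,
}
req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.load(resp)["choices"][0]["message"]["content"])
```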
Did a few coding experiments with Gemma 3 locally in LM Studio. So far it performs flawlessly (in terms of capability; on my lowly GeForce 3080 Ti it is fairly slow, something like 5 tokens per second). But I've got time, and it is mine, running locally, and no billionaire's corporation sees my prompts.
For me (a privacy nut) this is a big deal: not having to use ChatGPT for everything.