Unboxing my new Linux workstation.
#unboxing #workstation #pc #lenovo #thinkstation #p3 #linux #opensuse #tumbleweed #opensource #intel #corei9 #RTX3060 #12gbvram #digitalsouveranitat #digitalsovereignty #generativeAI #flux #fluxai #ForgeUI #sdxl
Dream designer, 2025.
Art deco style AI-generated image of a woman at her desk in front of a screen showing a web-like pattern: not a spider web, more like a dream catcher from Native American heritage. She seems lost in thought.
NB: alt text generated by Qwen3 VL 30B A3B Q4 (see alt text). Created in 🇫🇷 under 170W.
#genAIalttext #AIArt #SDXL #Qwen3VL #alttext
Building a small AI workstation with 2× RTX 5090 for SDXL, video generation, and regular LLM inference. Looking for the best value for performance. #AI #SDXL #LLM #GPU #Workstation #Intel #NVIDIA #RTX5090 #VideoGeneration #LLMInference #OnPrem
https://www.reddit.com/r/LocalLLaMA/comments/1oo5862/dual_5090_work_station_for_sdxl/
WARNING about the malicious code called "Reasoning", which is added to LLMs, GPT, and AI systems from Moscow and interposed as a man-in-the-middle. #Gemini #Bing #Grok #Mistral #Flux #Suno #Qwen #Deepseek #Deepl #SDXL #Dreamshaper #Imagen #Veo #Kling #Gemma #Eliza #Llama https://github.com/ohm-raumzeit/itheereum-os
Social media streamers keep talking about speed for #LLMs, but that's the wrong metric: who needs a system that produces subpar outputs you then have to painstakingly verify by hand? 6 GB is enough. The cloud models aren't accurate either: they can't be trusted at any size.
Summary:
- #GPT-OSS-20B Q8 works at 5 tok/s; not accurate, and reasoning eats lots of tokens.
- #Qwen3 VL 30B A3B Q4: same, about 5 tok/s.
- #SDXL is fine, relatively fast, great for inspiration.
- #Blender: not enough, very slow.
NB: 32 GB RAM required.
Little update on my #neuralnetwork #experiment: Something happened. What happened is hard to say so far. The motionnet model was only trained on 3 videos for 2000 steps. One of the videos was a butterfly on a flower. (You can see it attached).
So this is the first actual test I've run successfully on the full pipeline. The prompt: A butterfly on a flower.
You can see the first frame (generated by #SDXL weights) and the generated #video.
Been messing around with a little #prototype #neuralnetwork. It behaves similarly to a #video #codec. It uses a frozen set of weights from #StableDiffusionXL and uses its latent space to carry forward to a set of networks that behave like the B-frame motion vectors used by codecs like #MPEG. These are smaller networks, letting me train it on my regular old #GPU while relying on the work of the big boys for generating I-frames via #SDXL.
At least that's the theory. Reality remains to be seen.
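The I-frame/B-frame idea can be sketched in miniature. This is a hedged toy, not the author's actual pipeline: the frozen SDXL VAE encoder is replaced by a fixed random projection (`W_enc`, an assumption; the real model is not loaded), the "video" is synthetic drifting noise, and the small motion net is a single linear map trained to predict the next latent from the current one, playing the B-frame role:

```python
import numpy as np

rng = np.random.default_rng(0)
LATENT = 16  # toy latent size; real SDXL latents are far larger

# Frozen "encoder": a fixed random projection, never trained,
# standing in for SDXL's VAE encoder (the I-frame generator).
W_enc = rng.normal(size=(64, LATENT)) / 8.0

# Synthetic "video": each frame drifts slightly from the previous one,
# so consecutive latents are strongly correlated, as in real footage.
frames = [rng.normal(size=64)]
for _ in range(31):
    frames.append(frames[-1] + 0.05 * rng.normal(size=64))
latents = [f @ W_enc for f in frames]  # I-frame latents
pairs = list(zip(latents[:-1], latents[1:]))

def epoch_loss(W):
    """Mean squared error of next-latent prediction over the clip."""
    return sum(float(np.sum((z @ W - z1) ** 2)) for z, z1 in pairs) / len(pairs)

# Train the small "motion net" with plain full-batch gradient descent.
# Only this tiny map is trained; the encoder stays frozen throughout.
W_motion = np.zeros((LATENT, LATENT))
lr = 0.01
loss_before = epoch_loss(W_motion)
for _ in range(300):
    grad = np.zeros_like(W_motion)
    for z, z1 in pairs:
        grad += np.outer(z, z @ W_motion - z1)  # d(MSE)/dW for one pair
    W_motion -= lr * grad / len(pairs)
loss_after = epoch_loss(W_motion)
print(f"next-latent MSE: {loss_before:.3f} -> {loss_after:.3f}")
```

In the real setup, the predicted latent would be decoded back to pixels by the frozen SDXL decoder, so only the lightweight motion model needs training on a consumer GPU.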
What happens when a gravure photographer creates AI-generated gravure? Part 52: September 2025, when open image-generation AI models made a surging debut (Kazuhisa Nishikawa)
https://www.techno-edge.net/article/2025/10/14/4655.html
#technoedge #technology #news #reviews #games #gadgets #生成AIグラビアをグラビアカメラマンが作るとどうなる #FLUX_1 #SDXL #LLM #LoRA
Why is he wet? :pepe_sweat: