#AIserver

eicker.news ᳇ tech news technews@eicker.news
2026-02-16

#DRAM and #NAND #memoryprices have surged over 600% in the past year, driven by #AIserver demand. This has severely impacted consumer electronics like routers and set-top boxes, with memory costs now exceeding 20% of the bill of materials for low-to-mid-end routers. counterpointresearch.com/en/in #tech #media #news
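As a rough illustration of how such a price surge inflates a bill-of-materials (BOM) share, here is a back-of-envelope calculation; the starting share is a hypothetical assumption, not a figure from the report:

```python
# Hypothetical example: suppose memory started at 4% of a router's BOM.
# A 600% price increase means memory now costs 7x its original price,
# while the rest of the BOM is assumed flat.
initial_share = 0.04   # assumed starting memory share of BOM (illustrative)
multiplier = 7.0       # +600% = 7x the original price

new_memory_cost = initial_share * multiplier
other_costs = 1.0 - initial_share
new_share = new_memory_cost / (new_memory_cost + other_costs)

print(f"Memory share of BOM after surge: {new_share:.1%}")
```

Even a modest starting share lands above the 20% threshold cited for low-to-mid-end routers once prices multiply sevenfold.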

Yonhap Infomax News infomaxkorea
2025-11-26

Dell Technologies shares jumped after hours as the company issued a strong outlook on AI-driven sales, offsetting weaker-than-expected Q3 results.

en.infomaxai.com/news/articleV

Yonhap Infomax News infomaxkorea
2025-11-12

Foxconn, the world’s top AI server maker and an Nvidia partner, beat Q3 profit forecasts with net income of $1.9 billion on robust sales growth; its shares are up about 30% in 2024.

en.infomaxai.com/news/articleV

IT'S LIVE! 🚀 Chiefgyk3d is BUILDING an AI/LLM server NOW! 🤯 Plus Homelab upgrades, Cybersecurity, and Linux gaming! Don't miss out! Join the chaos!
#AIserver #Homelab #Cybersecurity #LinuxGaming

kick.com/chiefgyk3d

🔴 LIVE • 1 viewer • Just Chatting
2025-09-17

Building a local AI server with two GPU options: dual RTX 4070 Ti Super or dual RTX 5060 Ti? The goal is to run larger LLMs with 32GB of total VRAM at ~50 tokens/s. Unsure about PCIe 5.0 performance, FP4 support, and the dual-GPU setup. #AIServer #BuildPC #LocalAI #MáyTínhAI #XâyDựngPC

reddit.com/r/LocalLLaMA/commen
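A quick back-of-envelope check on whether a quantized model fits in 32 GB of combined VRAM; the model sizes and the flat overhead allowance below are illustrative assumptions, not measurements:

```python
def fits_in_vram(params_billions: float, bits_per_weight: int,
                 vram_gb: float, overhead_gb: float = 4.0) -> bool:
    """Rough fit check: weight memory plus a flat overhead allowance
    (KV cache, activations, CUDA context) against available VRAM."""
    weights_gb = params_billions * bits_per_weight / 8  # GB for weights alone
    return weights_gb + overhead_gb <= vram_gb

# Illustrative: a 32B-parameter model at 4-bit quantization needs ~16 GB
# for weights, leaving headroom on 2x16GB GPUs.
print(fits_in_vram(32, 4, vram_gb=32.0))
# A 70B model at 4-bit (~35 GB of weights alone) would not fit.
print(fits_in_vram(70, 4, vram_gb=32.0))
```

Actual headroom depends on context length and the inference runtime, so treat this as a first-pass filter, not a guarantee.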

Debby ‬⁂📎🐧:disability_flag:debby@hear-me.social
2025-09-13

Hi everyone! 👋
Questions for the community:

Does anyone have experience with these GPUs? Which would you recommend for running larger LLMs locally?
Are there other budget-friendly server GPUs I may have missed that are great for AI workloads?
Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
What is your favorite setup for local AI inference? I would love to hear about your experiences!

Thanks in advance! 🙌
#AIServer #LokaleAI #BudgetBuild #LLM #GPUAdvies #ThuisLab #AIHardware #DIYAI #ServerGPU #TweedehandsTech #AIGemeenschap #OpenSourceAI #ZelfGehosteAI #TechAdvies #AIWorkstation #MachineLeren #AIOnderzoek #FediverseAI #LinuxAI #AIBouw #DeepLearning #ServerBouw #BudgetAI #AIEdgeComputing #Vragen #CommunityVragen

Debby ‬⁂📎🐧:disability_flag:debby@hear-me.social
2025-09-13

Hey everyone 👋

I’m diving deeper into running AI models locally—because, let’s be real, the cloud is just someone else’s computer, and I’d rather have full control over my setup. Renting server space is cheap and easy, but it doesn’t give me the hands-on freedom I’m craving.

So, I’m thinking about building my own AI server/workstation! I’ve been eyeing some used ThinkStations (like the P620) or even a server rack, depending on cost and value. But I’d love your advice!

My Goal:
Run larger LLMs locally on a budget-friendly but powerful setup. Since I don’t need gaming features (ray tracing, DLSS, etc.), I’m leaning toward used server GPUs that offer great performance for AI workloads.

Questions for the Community:
1. Does anyone have experience with these GPUs? Which one would you recommend for running larger LLMs locally?
2. Are there other budget-friendly server GPUs I might have missed that are great for AI workloads?
3. Any tips for building a cost-effective AI workstation? (Cooling, power supply, compatibility, etc.)
4. What’s your go-to setup for local AI inference? I’d love to hear about your experiences!

I’m all about balancing cost and performance, so any insights or recommendations are hugely appreciated.

Thanks in advance! 🙌

@selfhosted@a.gup.pe #AIServer #LocalAI #BudgetBuild #LLM #GPUAdvice #Homelab #AIHardware #DIYAI #ServerGPU #ThinkStation #UsedTech #AICommunity #OpenSourceAI #SelfHostedAI #TechAdvice #AIWorkstation #MachineLearning #AIResearch #FediverseAI #LinuxAI #AIBuild #DeepLearning #ServerBuild #BudgetAI #AIEdgeComputing #Questions #CommunityQuestions #HomeLab #HomeServer #Ailab #llmlab

What is the best used GPU pick for AI researchers?

GPUs I’m considering:

| GPU Model | VRAM | Pros | Cons/Notes |
| --- | --- | --- | --- |
| Nvidia Tesla M40 | 24GB GDDR5 | Reliable, less costly than V100 | Older architecture, but solid for budget builds |
| Nvidia Tesla M10 | 32GB (4x 8GB) | High total VRAM, budget-friendly on used market | Split VRAM might limit some workloads |
| AMD Radeon Instinct MI50 | 32GB HBM2 | High bandwidth, strong FP16/FP32, ROCm support | ROCm ecosystem is improving but not as mature as CUDA |
| Nvidia Tesla V100 | 32GB HBM2 | Mature AI hardware, strong Linux/CUDA support | Pricier than M40/M10 but excellent performance |
| Nvidia A40 | 48GB GDDR6 | Huge VRAM, server-grade GPU | Expensive, but future-proof for larger models |
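One way to compare candidates like these for a budget build is VRAM per dollar; the used-market prices below are purely hypothetical placeholders, not quotes:

```python
# Hypothetical used prices in USD -- placeholders for illustration only.
gpus = {
    "Tesla M40 24GB": (24, 150),
    "Tesla M10 32GB": (32, 180),
    "Instinct MI50 32GB": (32, 300),
    "Tesla V100 32GB": (32, 900),
    "A40 48GB": (48, 2500),
}

# Rank by GB of VRAM per dollar (higher is cheaper per gigabyte).
ranked = sorted(gpus.items(), key=lambda kv: kv[1][0] / kv[1][1], reverse=True)
for name, (vram_gb, price) in ranked:
    print(f"{name}: {vram_gb / price:.3f} GB/$")
```

VRAM per dollar ignores bandwidth, compute throughput, and software support (CUDA vs. ROCm), so it is only a starting point, not a verdict.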
Yonhap Infomax News infomaxkorea
2025-08-28

Dell Technologies shares fell nearly 4% after the company issued Q3 EPS guidance of $2.45, missing market expectations, despite strong AI server sales and an improved full-year outlook.

en.infomaxai.com/news/articleV

Yonhap Infomax News infomaxkorea
2025-06-19

Kokolink unveiled its domestically developed high-performance server Klimax-408, aiming to boost South Korea's AI infrastructure with cost-efficient, globally competitive technology.

en.infomaxai.com/news/articleV

Yonhap Infomax News infomaxkorea
2025-02-12

Super Micro Computer's stock tumbles 9% as Q4 earnings guidance disappoints investors, raising concerns about AI server market growth.

en.infomaxai.com/news/articleV

Yonhap Infomax News infomaxkorea
2025-02-11

Super Micro Computer's stock rallies for five consecutive days, marking its longest uptrend in six months, as investors await a crucial business update and earnings report.

en.infomaxai.com/news/articleV

UnihostUnihost
2023-04-25

Rent dedicated servers for neural networks and big data 👇👇
unihost.com/en/dedicated/bigda
