\o/ #tpus day \o/
Damn. I think I'm coming down with a cold. It had better not. I want to go to #tpus on Thursday!! (And in general.)
My week:
Today: reading
Tomorrow: reading
Wednesday: concert
Thursday: #tpus
Friday: nothing
Saturday: reading
#Nvidia secured a non-exclusive licensing agreement with #Groq, an #AIchip startup, for $20 billion. The deal also brings Groq's CEO, #JonathanRoss, on board, along with the company's #inferencetechnology and #intellectualproperty. It is seen as a strategic play to counter #Google's success with #TPUs and maintain Nvidia's dominance in the #AIchipmarket. https://spyglass.org/nvidia-groq-deal/?eicker.news #tech #media #news
Touching the Elephant – TPUs
https://considerthebulldog.com/tte-tpu/
#HackerNews #TouchingTheElephant #TPUs #TPU #Technology #AI #Hardware #MachineLearning
"If you go back a year or two, you might make the case that Nvidia had three moats relative to TPUs: superior performance, significantly more flexibility due to GPUs being more general purpose than TPUs, and CUDA and the associated developer ecosystem surrounding it. OpenAI, meanwhile, had the best model, extensive usage of their API, and the massive number of consumers using ChatGPT.
The question, then, is what happens if the first differentiator for each company goes away? That, in a nutshell, is the question that has been raised over the last two weeks: does Nvidia preserve its advantages if TPUs are as good as GPUs, and is OpenAI viable in the long run if they don’t have the unquestioned best model?
Nvidia’s flexibility advantage is a real thing; it’s not an accident that the fungibility of GPUs across workloads was focused on as a justification for increased capital expenditures by both Microsoft and Meta. TPUs are more specialized at the hardware level, and more difficult to program for at the software level; to that end, to the extent that customers care about flexibility, then Nvidia remains the obvious choice.
CUDA, meanwhile, has long been a critical source of Nvidia lock-in, both because of the low level access it gives developers, and also because there is a developer network effect: you’re just more likely to be able to hire low level engineers if your stack is on Nvidia. The challenge for Nvidia, however, is that the “big company” effect could play out with CUDA in the opposite way to the flexibility argument. While big companies like the hyperscalers have the diversity of workloads to benefit from the flexibility of GPUs, they also have the wherewithal to build an alternative software stack. That they did not do so for a long time is a function of it simply not being worth the time and trouble..."
https://stratechery.com/2025/google-nvidia-and-openai/
#AI #GenerativeAI #Nvidia #Google #ChatGPT #OpenAI #LLMs #Chatbots #CUDA #GPUs #TPUs
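The "flexibility" argument in the quote above comes down to hardware shape: a GPU runs arbitrary kernels, while a TPU's matrix unit is hard-wired for one access pattern, the tiled matrix multiply. A toy sketch of that pattern (the tile size here is illustrative; a real TPU MXU uses 128x128 tiles, and real code would never run this in pure Python):

```python
# Toy illustration of why a TPU is "specialized": its core is a systolic
# matrix unit that only accelerates fixed-size tiled matrix multiplies.

TILE = 2  # illustrative tile edge, not a real hardware dimension


def tiled_matmul(a, b):
    """Multiply square matrices by accumulating TILE x TILE blocks,
    the access pattern a systolic array is hard-wired for."""
    n = len(a)
    c = [[0.0] * n for _ in range(n)]
    for i0 in range(0, n, TILE):          # tile row
        for j0 in range(0, n, TILE):      # tile column
            for k0 in range(0, n, TILE):  # reduction tiles
                for i in range(i0, min(i0 + TILE, n)):
                    for j in range(j0, min(j0 + TILE, n)):
                        for k in range(k0, min(k0 + TILE, n)):
                            c[i][j] += a[i][k] * b[k][j]
    return c


a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
print(tiled_matmul(a, b))  # matches an ordinary matmul
```

Anything that fits this pattern (dense layers, attention) flies on a TPU; anything that doesn't falls back to slower paths, which is the flexibility trade-off the quote describes.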
"In the blistering race for AI supremacy, Nvidia has long reigned as the undisputed king. Its GPUs powered the explosive growth of machine learning, turning abstract neural networks into reality and fueling an empire valued at trillions. But as the AI landscape evolves, cracks are appearing in Nvidia's armor. The shift from model training (Nvidia's stronghold) to inference, the real-time application of those models, is reshaping the market. And at the forefront of this revolution stands Google's Tensor Processing Units (TPUs), delivering unmatched efficiency and cost savings that could spell the end of Nvidia's monopoly.
By 2030, inference will consume 75% of AI compute, creating a $255 billion market growing at 19.2% annually. Yet most companies still optimize for training costs. This isn't just hype; it's economics. Training is a one-time sprint, but inference is an endless marathon. As companies like OpenAI grapple with skyrocketing inference bills (projected at $2.3 billion for 2024 alone, dwarfing the $150 million cost to train GPT-4), Google's TPUs emerge as the cost-effective powerhouse. In this in-depth analysis, we'll explore how TPUs are winning the inference war, backed by real-world migrations from industry leaders, and why this pivot signals Nvidia's impending decline."
https://www.ainewshub.org/post/ai-inference-costs-tpu-vs-gpu-2025
#AI #AIInference #GenerativeAI #Nvidia #Google #GPUs #TPUs #LLMs
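The economics in the quote above can be sanity-checked with a few lines of arithmetic. All figures are the article's own estimates, not audited numbers:

```python
# Back-of-the-envelope check of the post's claims (all figures are the
# article's estimates, not verified data).

TRAIN_COST = 0.15e9           # one-time GPT-4 training cost, $150M (per the post)
INFER_COST_PER_YEAR = 2.3e9   # OpenAI's projected 2024 inference bill (per the post)

# The recurring inference bill overtakes the one-time training cost within weeks:
breakeven_years = TRAIN_COST / INFER_COST_PER_YEAR
print(f"inference matches training spend after ~{breakeven_years * 52:.1f} weeks")

# A $255B market in 2030 growing at 19.2% annually implies roughly this
# market size in 2025, discounting back five years:
market_2030 = 255e9
growth = 0.192
market_2025 = market_2030 / (1 + growth) ** 5
print(f"implied 2025 inference market: ${market_2025 / 1e9:.0f}B")
```

The first number is the "endless marathon" point in miniature: at the quoted figures, a year of inference costs roughly fifteen training runs.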
#Google is selling its #TPU #chips to external customers, challenging #Nvidia’s dominance in the #AIhardware market. #Anthropic, a major AI company, is a significant customer, purchasing over 1 million #TPUs. This move positions Google as a direct competitor to Nvidia, offering a differentiated #cloudprovider option with its #inhousesilicon design capabilities. https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-swing-at-the?eicker.news #tech #media #news
TPUs vs. GPUs and why Google is positioned to win AI race in the long term
https://www.uncoveralpha.com/p/the-chip-made-for-the-ai-inference
\o/ #tpus day \o/
#Nvidia asserts its #GPUs are a generation ahead of #Google’s #AIchips, despite concerns about potential #competition. Nvidia highlights its chips’ flexibility and power compared to Google’s #ASIC chips, emphasising their ability to run every AI model. While Google’s #TPUs are gaining attention, Nvidia remains a dominant player in the #AIchipmarket. https://www.cnbc.com/2025/11/25/nvidia-says-its-gpus-are-a-generation-ahead-of-googles-ai-chips.html?eicker.news #tech #media #news
Launch in 2027 🛰️ The first test #satellites, each carrying four #TPUs, are scheduled to launch in 2027 to prove the concept in practice.
Challenges ⚠️ #Radiation in #space can damage #chips, but initial tests suggest the hardware could survive missions of five to six years.
👉 https://eicker.TV ▹ #Technik #Medien #Politik #Wirtschaft ⏻ https://eicker.BE/ratung #Onlinestrategie and #Onlinemarketing by Gerrit Eicker from #Münster in #Münsterland, #Westfalen
#Google plans to launch #AIchips, known as Tensor Processing Units (#TPUs), into low-earth #orbit to power #datacentres with #solarenergy. The TPUs, attached to #satellites, will form a constellation for #highbandwidth #communication, potentially becoming economical by 2035. Challenges include radiation exposure and ensuring chip longevity. https://www.semafor.com/article/11/04/2025/google-wants-to-build-solar-powered-data-centers-in-space?eicker.news #tech #media #news
The #tpus meets next on 30 October at the Harp. So nobody can say they didn't know.
Another important question at this #tpus comes from @schwobimexil.bsky.social:
"Why are the locksmith and the cobbler always in the same shop?"
The #tpus meets next on 25 September, 7 p.m., at the well-known usual spot.
Today is finally #tpus again! Which means that, for the first time since Saturday, I'll have substantial conversations with other people.
Meta Signs a $10 Billion Deal with Google Cloud to Power Its AI Push
Meta has sealed a massive agreement with Google Cloud, valued at $10 billion, to use its cloud infrastructure and powerful AI chips. The strategic partnership underscores Meta's ambitious bet on artificial intelligence and the scale of compute needed to compete in the global AI race.
In a move that shakes the foundations of the tech industry, Mark Zuckerberg's Meta has signed a multi-year agreement with Google Cloud, a strategic partnership expected to reach $10 billion. The core of the deal gives Meta access to Google's cloud infrastructure, including a guaranteed supply of its cutting-edge AI hardware, such as Tensor Processing Units (TPUs) and Nvidia GPUs.
This massive investment is aimed at meeting Meta's immense need for compute to train its artificial-intelligence models, including the next generation of its popular large language model, Llama.
The $10 billion deal is not just an astronomical figure that highlights the cost of building state-of-the-art AI; it is also a symbolic milestone in the rivalry between the two companies. While Meta and Google compete in areas such as digital advertising and social networking, the agreement shows that, in the AI race, access to cloud infrastructure matters more than direct competition. Google Cloud is positioning itself as an essential partner for big tech companies, offering them the resources they need to build their own AI systems rather than turning to rival services.
For Meta, the investment is a clear signal of its long-term commitment to artificial intelligence, an acknowledgement that its own data-center infrastructure, massive as it is, is not enough to stay at the cutting edge.
#Acuerdo #CloudComputing #GoogleCloud #IA #InteligenciaArtificial #Inversion #Llama #Meta #Tecnologia #TPUs #arielmcorg #infosertec #PORTADA
🥊 Google just landed a massive uppercut in the AI-hardware showdown. Is Nvidia out cold, or is this just Round One? 🥊
Dive into the battle here: https://blog.noahrijkaard.com/google-uppercuts-nvidia-in-the-ai-boxing-match-will-they-be-able-to-stay-in-the-ring/
#AIHardware
#GPUWars
#NvidiaVsGoogle
#TPUs
#AIinfrastructure
#TechSmackdown
#FutureOfAI
#ComputeBattle
#BigTechRumble
#AIChips