#Rubin

Daniel Fischer cosmos4u@scicomm.xyz
2026-02-05

NSF-DOE Vera C. #Rubin Observatory Observations of Interstellar Comet 3I/ATLAS (C/2025 N1): arxiv.org/abs/2507.13409 - the first five shown here were taken before the comet was discovered (and the fourth may well be the only out-of-focus Rubin image ever shared with the public, since the commissioning images were all held back before the big reveal show last June).

Gallery of serendipitous observations of 3I/ATLAS from the NSF-DOE Vera C. Rubin Observatory. All images are 30″ × 30″ and have been reprojected so that North is up and East is to the left (green arrows). The anti-solar (yellow, black-outlined arrow) and anti-motion (black, red-outlined arrow) directions are indicated. All dates and times are International Atomic Time (TAI). (a) 2025 June 21 08:11:32. (b) 2025 June 22 02:32:47. An area of roughly vertical saturation masking can be seen near the center of the frame; 3I/ATLAS is not within the masking, but the nearby blended star is. (c) 2025 June 22 03:07:49. (d) 2025 June 24 03:07:46. The image was unintentionally out of focus. (e) 2025 June 30 02:26:26. 3I/ATLAS is adjacent to the saturated star at the center. (f) 2025 July 02 00:44:25.
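The caption's north-up, east-left orientation is a standard reprojection step. Below is a minimal sketch of how such a cutout could be reprojected with astropy and the reproject package; the file name, 0.2″/pixel scale, and 150 × 150 output size are illustrative assumptions, not the paper's pipeline.

```python
from astropy.io import fits
from astropy.wcs import WCS
from reproject import reproject_interp

# Hypothetical 30" x 30" cutout around the comet.
hdu = fits.open("cutout.fits")[0]
src_wcs = WCS(hdu.header)

# Sky position at the centre of the input cutout.
ny, nx = hdu.data.shape
centre = src_wcs.pixel_to_world(nx / 2, ny / 2)

# Target WCS: gnomonic projection, North up. The negative RA step per pixel
# is what puts East on the left; 0.2"/pixel is an assumed LSSTCam-like scale.
scale_deg = 0.2 / 3600.0
target = WCS(naxis=2)
target.wcs.ctype = ["RA---TAN", "DEC--TAN"]
target.wcs.crval = [centre.ra.deg, centre.dec.deg]
target.wcs.crpix = [75.5, 75.5]          # centre of a 150 x 150 pixel frame
target.wcs.cdelt = [-scale_deg, scale_deg]

north_up, footprint = reproject_interp(hdu, target, shape_out=(150, 150))
```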
Daniel Fischer cosmos4u@scicomm.xyz
2026-02-04

Strategy for Identifying Vera C. #Rubin Observatory #Kilonova Candidates for Targeted #GravitationalWave Searches: iopscience.iop.org/article/10. -> How Many Kilonovae Will Rubin Observatory Help Us Spot? aasnova.org/2026/02/04/how-man

2026-02-01

#TVL where did I leave that phone...?
#Brno #Rubín

BuySellRam.com alexbsr@toot.io
2026-01-18

NVIDIA’s new Inference Context Memory Storage Platform reshapes AI inference by treating KV cache as a multi-tier memory hierarchy—from HBM to NVMe SSD. This enables longer context windows, persistent reasoning, and scalable multi-agent inference while keeping hot data in GPU memory and offloading cold context to SSD.
buysellram.com/blog/nvidia-unv
#NVIDIA #Rubin #AI #Inference #LLM #AIInfrastructure #MemoryHierarchy #HBM #NVMe #DPU #BlueField4 #AIHardware #GPU #DRAM #KVCache #DataCenter #tech
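For a sense of what "hot data in GPU memory, cold context on SSD" means mechanically, here is a minimal two-tier sketch in Python. Everything in it is a hypothetical illustration of the hot/cold split the post describes, not NVIDIA's actual API; the "cold" dict stands in for NVMe-backed storage.

```python
from collections import OrderedDict

class TieredKVCache:
    """Keep hot KV blocks in a fast tier; demote cold blocks to a slow tier."""

    def __init__(self, hot_capacity_blocks: int):
        self.hot = OrderedDict()   # block_id -> KV tensor, LRU order ("HBM")
        self.cold = {}             # block_id -> KV tensor ("NVMe" stand-in)
        self.capacity = hot_capacity_blocks

    def put(self, block_id, kv_block):
        self.hot[block_id] = kv_block
        self.hot.move_to_end(block_id)                      # mark most recent
        while len(self.hot) > self.capacity:
            victim, tensor = self.hot.popitem(last=False)   # evict LRU block
            self.cold[victim] = tensor                      # "offload to SSD"

    def get(self, block_id):
        if block_id in self.hot:                 # hot hit: stays GPU-resident
            self.hot.move_to_end(block_id)
            return self.hot[block_id]
        kv_block = self.cold.pop(block_id)       # cold hit: promote from SSD
        self.put(block_id, kv_block)
        return kv_block
```

The real platform reportedly spans more tiers (CPU memory, cluster-shared context) with DPU-managed transfers; the LRU promotion/demotion above only illustrates the basic idea.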

2026-01-18

NVIDIA’s Inference Context Memory Storage Platform, announced at CES 2026, marks a major shift in how AI inference is architected. Instead of forcing massive KV caches into limited GPU HBM, NVIDIA formalizes a hierarchical memory model that spans GPU HBM, CPU memory, cluster-level shared context, and persistent NVMe SSD storage.

This enables longer-context and multi-agent inference by keeping the most active KV data in HBM while offloading less frequently used context to NVMe—expanding capacity without sacrificing performance. This shift also has implications for AI infrastructure procurement and the secondary GPU/DRAM market, as demand moves toward higher bandwidth memory and context-centric architectures.
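To make the capacity pressure concrete, a back-of-envelope sizing shows why a long context cannot stay in HBM alone. The model shape below is an illustrative assumption (a 70B-class transformer with grouped-query attention), not a figure from the announcement.

```python
# KV bytes per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem
layers, kv_heads, head_dim = 80, 8, 128
bytes_per_elem = 2                       # fp16/bf16
context_tokens = 1_000_000               # a long-context session

kv_bytes = 2 * layers * kv_heads * head_dim * bytes_per_elem * context_tokens
print(f"{kv_bytes / 2**30:.0f} GiB")     # ~305 GiB, well beyond one GPU's HBM
```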

buysellram.com/blog/nvidia-unv

#NVIDIA #Rubin #AI #Inference #LLM #AIInfrastructure #MemoryHierarchy #HBM #NVMe #DPU #BlueField4 #AIHardware #GPU #DRAM #KVCache #LongContextAI #DataCenter #AIStorage #AICompute #AIEcosystem #technology

Microsoft DevBlogs msftdevblogs@dotnet.social
2026-01-16

CES 2026 spotlights NVIDIA Vera Rubin landing on Azure’s AI platform.

Years of co‑design and Fairwater upgrades mean Azure already supports Rubin’s power, cooling, NVLink‑6, HBM4/HBM4e, SOCAMM2 memory expansion and 1.6 Tb/s ConnectX‑9 networking — enabling ~50 PF NVFP4 per chip and ~3.6 EF per rack.

Integrated compute, storage, networking, orchestration and cooling enable faster deployment and scale for customers.
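As a quick consistency check using only the figures quoted above, ~3.6 EF per rack divided by ~50 PF per chip points to a 72-GPU rack:

```python
pf_per_chip = 50          # ~50 PF NVFP4 per chip (quoted above)
ef_per_rack = 3.6         # ~3.6 EF NVFP4 per rack (quoted above)
print(ef_per_rack * 1000 / pf_per_chip)   # 72.0 chips per rack
```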

#AI #Azure #NVIDIA #Rubin #DataCenter

NVIDIA (@nvidia)

At CES, Jensen Huang name-checked GR00T and Alpamayo alongside Rubin, announcing that physical AI is being realized in factories, robots, and next-generation autonomous vehicles. He shared the keynote video and links to the detailed announcements, highlighting real-world Rubin deployments and the related product line.

x.com/nvidia/status/2011119763

#nvidia #rubin #gr00t #alpamayo #physicalai

NVIDIA Data Center (@NVIDIADC)

Last week at CES 2026, Jensen Huang unveiled NVIDIA Rubin, a new "extreme-codesigned" six-chip AI platform, and announced major advances in open models and physical AI alongside it. On the show floor, demonstrations of AI infrastructure and accelerated computing underscored a large-scale AI platform that couples hardware and software.

x.com/NVIDIADC/status/20111433

#nvidia #rubin #physicalai #acceleratedcomputing

2026-01-07

🧠 #NVIDIA unveils the new #Rubin platform at #CES2026: a next-generation #AI architecture designed to scale artificial intelligence to unprecedented levels.

👉 Details: linkedin.com/posts/alessiopoma

___
✉️ If you want to stay up to date on these topics, subscribe to my newsletter: bit.ly/newsletter-alessiopomaro

#AI #GenAI #GenerativeAI #IntelligenzaArtificiale #LLM 

2026-01-07

🔍 First look at the R200 “Rubin” chip & RTX 6000 Rubin (a rough bandwidth check follows the list):
- 384 GB of HBM4 VRAM (2 × 8192-bit), 22,528 GB/s bandwidth, 2.75 GHz clock.
- Versus the B200, FP16/FP32 performance rises from 4.5 PF to 8 PF, FP8/FP16 from 9 PF to 35 PF, and NVFP4 from 9 PF to 50 PF.
- The RTX 6000 Rubin is expected with 96 GB of GDDR7 at 4.712 GHz, ~1.93 TB/s bandwidth, and ~1.1 PF of compute.
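Those bandwidth figures can be sanity-checked with the usual formula, bandwidth = bus width × per-pin data rate / 8. The per-pin rates below are inferred from the post's own numbers, not confirmed specs.

```python
def bandwidth_gbs(bus_width_bits: int, gbps_per_pin: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return bus_width_bits * gbps_per_pin / 8

# R200 claim: 2 x 8192-bit HBM4; 22,528 GB/s implies ~11 Gb/s per pin.
print(bandwidth_gbs(2 * 8192, 11.0))   # 22528.0

# RTX 6000 Rubin claim: ~1.93 TB/s; a 512-bit GDDR7 bus would need ~30 Gb/s.
print(bandwidth_gbs(512, 30.2))        # ~1932.8
```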

💰 Users are already setting aside US$10k for this one.

#AI #GPU #Rubin #R200 #Tech #CôngNghệ #Vietnam #TechNews

reddit.com/r/LocalLLaMA/commen

Dr Mircea Zloteanu ❄️☃️🎄 mzloteanu
2026-01-06

#457 Race as a Bundle of Sticks: Designs that Estimate Effects of Seemingly Immutable Characteristics

Thoughts: The theoretical framework a researcher uses will affect the causal inference they can make.

annualreviews.org/content/jour

Le site de Korben korben.info@web.brid.gy
2026-01-06

NVIDIA unleashes Rubin and Alpamayo - when AI shifts into second gear (literally)

fed.brid.gy/r/https://korben.i

After Blackwell, after GPUs that run as hot as nuclear radiators, after the promises of an AI revolution at every keynote, here comes Rubin! And this time NVIDIA isn't just tossing out a new chip; they're also releasing an open-source model for autonomous driving.

Rubin, then, is the new architecture succeeding Blackwell. But make no mistake, this is not a simple evolution. It's a system of six chips working in concert: the Rubin GPU of course, but also the Vera CPU with its 88 Olympus cores, NVLink 6 pushing 3.6 TB/s per GPU, and a whole armada of DPUs and network switches. Together they deliver 50 petaflops of NVFP4 and cut the inference cost per token by a factor of 10 compared with Blackwell. Training MoE models takes 4x fewer GPUs. Not bad for trimming the power bill.

(embedded keynote video)

But the thing that really fired up my neurons is Alpamayo. NVIDIA calls it the "ChatGPT moment of physical AI" and for once I don't think it's just marketing. Alpamayo 1 is a 10-billion-parameter vision-language-action model that handles autonomous driving with chain-of-thought reasoning. Concretely, instead of merely detecting obstacles and computing a trajectory, the thing actually reasons: it breaks complex situations down into sub-problems.
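Purely to illustrate the reason-then-act pattern described above, here is a toy sketch of a chain-of-thought driving step. Every name in it is hypothetical; this is not Alpamayo's interface or behavior.

```python
from dataclasses import dataclass

@dataclass
class Action:
    steering: float   # radians
    accel: float      # m/s^2

def drive_step(scene: str, reason) -> Action:
    """Decompose the situation into sub-problems before committing to an action."""
    plan = reason(
        f"Scene: {scene}\n"
        "Step 1: list hazards. Step 2: rank them by urgency. "
        "Step 3: pick a manoeuvre. Answer only with 'steer,accel'."
    )
    steer, accel = (float(x) for x in plan.split(","))
    return Action(steering=steer, accel=accel)

# Usage with any text-reasoning callable, here a stub standing in for a model:
action = drive_step("cyclist merging from the right", lambda p: "-0.1,-2.0")
```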
S.v. N.Sönmez nsonmez84
2026-01-06

Nvidia has introduced Rubin, the platform launching a new era in AI! Delivering up to 5x the performance of Blackwell, Rubin stands out with its HBM4 GPUs and Vera CPU. A big step for the AI world!
