#GPUComputing

2025-12-11

If you've ever built, or at least set up, a PC, you probably have this picture in your head:

CPU: that small, square part that slots straight into the socket on the motherboard; you apply thermal paste and clamp the cooler on top.

moprius.com/2025/12/cpu-e-gpu-

2025-12-08

Unlock GPU acceleration with NVIDIA's cuTile and cutile-python

NVIDIA's cuTile is a novel programming model designed to streamline the development of parallel kernels for NVIDIA GPUs, enabling efficient execution of complex computations. By leveraging cuTile, developers can create high-performance applications that fully utilize the capabilities of NVIDIA's graphics processing units. The cuTile...

2025-12-07

Unlock GPU acceleration with NVIDIA's cuTile, revolutionizing parallel kernel development

NVIDIA's cuTile is a groundbreaking programming model designed to simplify the development of parallel kernels for NVIDIA GPUs, enabling developers to harness the full potential of GPU acceleration. By leveraging cuTile, developers can create high-performance applications that efficiently utilize the massively...
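The two posts above don't show cuTile's actual API, so as a rough illustration of the tile-oriented style they describe, here is a block/tile kernel written with Numba CUDA. Numba, the kernel name, and the tile size are assumptions for this sketch, not part of cutile-python: each thread block stages a tile of the input in shared memory and then operates on it cooperatively, which is the "think in tiles, not individual threads" idea.

import numpy as np
from numba import cuda, float32

TILE = 256  # elements handled by one thread block (one "tile")

@cuda.jit
def scaled_add_tiled(x, y, out, alpha):
    # Stage this block's tile of x in shared memory, then operate on it.
    tile = cuda.shared.array(TILE, dtype=float32)
    i = cuda.grid(1)          # global element index
    t = cuda.threadIdx.x      # index within the tile
    if i < x.size:
        tile[t] = x[i]
    cuda.syncthreads()        # the whole tile is now staged
    if i < x.size:
        out[i] = alpha * tile[t] + y[i]

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

blocks = (n + TILE - 1) // TILE
scaled_add_tiled[blocks, TILE](x, y, out, np.float32(2.0))
print(np.allclose(out, 2.0 * x + y))

This requires a CUDA-capable GPU and Numba installed; it is a sketch of the tile pattern, not a cuTile program.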

2025-11-12

Nebius Group reported a Q3 net loss of $120M amid heavy spending on AI infrastructure, but secured a $3B, five-year deal with Meta to provide cloud and GPU resources for next-gen AI models. The partnership strengthens Nebius’s position in the high-performance AI cloud market and underscores its long-term growth potential despite short-term losses.

#Nebius #Meta #AIInfrastructure #ArtificialIntelligence #CloudComputing #GPUComputing #TECHi

Read Full Article Here :- techi.com/nebius-reports-q3-lo

2025-10-10

🚀 New on the Bioconductor Blog: GPU Support in Bioconductor

📝 Written by Andres Wokaty

Bioconductor is building stronger support for GPU-accelerated package development, enabling faster and more scalable analysis workflows.

Learn how package maintainers can take advantage of this new GPU infrastructure: blog.bioconductor.org/posts/20

#Bioconductor #GPUcomputing #Bioinformatics

Amanda Randles 🧪⚛️ 👩‍🔬 (profamandarandles.bsky.social@bsky.brid.gy)
2025-06-10

🧪Curious about high performance across GPUs? Our new paper benchmarks a parallel FSI code on CUDA, SYCL & OpenMP across top systems. See Aristotle Martin present it at #ISC2025 on June 11, 10:45 in Hamburg! #HPC #GPUcomputing #PerformancePortability

N-gated Hacker News (ngate)
2025-05-04

🚀 So, you think strapping consumer GPUs together is the tech equivalent of duct-taping a rocket? 🤔 GitHub's magical fairy dust promises to turn your GPU potato farm into a supercomputer, but only if you squint hard enough. 🥔✨
github.com/Foreseerr/TScale

2025-03-21

🚀 Ready to test the limits of performance?

Join the @EPCC Hackathon on AMD GPUs and explore the cutting-edge #MI300A and AMD’s Next Generation #Fortran Compiler with #OpenMP offload!

💻 Bring your code, ideas, and curiosity.
🔧 Optimize, accelerate, and innovate with us.
🏆 Let’s see what you can build!

🔗 archer2.ac.uk/training/courses

#AMDGPU #HPC #GPUComputing #Hackathon #OpenScience

apfeltalk (apfeltalk@creators.social)
2025-03-19

NVIDIA unveils DGX Spark and DGX Station: AI supercomputers for the desktop
At GTC 2025, NVIDIA introduced two new AI supercomputers that bring data-center performance to the desktop for the first time
apfeltalk.de/magazin/news/nvid
#KI #News #DataScience #DGXSpark #DGXStation #GPUComputing #GraceBlackwell #HighPerformanceComputing #KIEntwicklung #KISupercomputer #MachineLearning #NVIDIADGX

iamchrisg (iamchrisg)
2025-02-27

This is a fantastic chance to contribute to the future of collider physics simulations! Interested? Find out more and apply here: smartrecruiters.com/CERN/74400

2025-01-14

And compression is now super fast!
💻Performance on Mac M1:
✅𝐂𝐨𝐦𝐩𝐫𝐞𝐬𝐬𝐢𝐨𝐧: 7 GB/s
✅𝐃𝐞𝐜𝐨𝐦𝐩𝐫𝐞𝐬𝐬𝐢𝐨𝐧: 8 GB/s
Wait till multithreading happens on GPU and you only decompress on demand

#compression #llms #GPUComputing #ai

𝐏𝐚𝐩𝐞𝐫: alphaxiv.org/abs/2411.05239
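For readers unfamiliar with the "decompress on demand" idea the post mentions, here is a minimal sketch of the access pattern under generic assumptions: data is compressed in independent fixed-size chunks, so a reader inflates only the chunks it actually touches. zlib, the chunk size, and the helper names are placeholders for illustration, not the paper's method.

import zlib

CHUNK = 1 << 20  # 1 MiB per chunk

def compress_chunks(data: bytes) -> list[bytes]:
    # Compress each fixed-size chunk independently so chunks can be
    # decompressed in isolation later.
    return [zlib.compress(data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def read_range(chunks: list[bytes], offset: int, length: int) -> bytes:
    # Return `length` bytes starting at `offset`, inflating only the
    # chunks that overlap the requested range.
    out = bytearray()
    first, last = offset // CHUNK, (offset + length - 1) // CHUNK
    for idx in range(first, last + 1):
        piece = zlib.decompress(chunks[idx])          # inflate one chunk only
        lo = offset - idx * CHUNK if idx == first else 0
        hi = offset + length - idx * CHUNK if idx == last else CHUNK
        out += piece[lo:hi]
    return bytes(out)

data = bytes(range(256)) * 16384            # ~4 MiB of sample data
chunks = compress_chunks(data)
assert read_range(chunks, 3_000_000, 1024) == data[3_000_000:3_001_024]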

Chapel Programming Language (chapelprogramminglanguage)
2024-04-29

How does Chapel make moving data in and out of GPU memory easy? How about doing that on a supercomputer with 1000s of GPUs?

Check out our 2nd GPU programming article by Engin Kayraklioglu: chapel-lang.org/blog/posts/gpu
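For contrast with the implicit data movement the Chapel article describes, this is the explicit host-to-device round trip you would otherwise write by hand. Numba CUDA is an assumed stand-in here for illustration, not Chapel's mechanism.

import numpy as np
from numba import cuda

@cuda.jit
def double_in_place(a):
    i = cuda.grid(1)
    if i < a.size:
        a[i] *= 2.0

host = np.arange(1_000_000, dtype=np.float32)

dev = cuda.to_device(host)                          # explicit host -> device copy
double_in_place[(host.size + 255) // 256, 256](dev)
result = dev.copy_to_host()                         # explicit device -> host copy

print(np.allclose(result, host * 2.0))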

Chapel Programming Language (chapelprogramminglanguage)
2024-04-24

Since its inception, GPU computing has struggled to balance competing demands for productivity, portability, and performance.

Can Chapel 2.0 fill this niche?

Virginia Tech's Paul Sathre will answer this question in his keynote address at ChapelCon '24!

Register here: hpe.zoom.us/meeting/register/t

MWibral (mwibral)
2022-02-08

In the long run it seems we have to replace the GPU layer in our scientific software, which used pyopencl for GPU computing on all vendors' cards. Which way should we go?

We want vendor neutrality, longevity of the software, and an easy way to use it from Python (ah, and performance, of course)
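For context, this is roughly the kind of vendor-neutral pyopencl code the post is weighing whether to keep or replace; the kernel, sizes, and values are a generic sketch, not the project's actual code.

import numpy as np
import pyopencl as cl

# Minimal pyopencl vector add: the same OpenCL C kernel runs on any vendor's
# device that exposes an OpenCL driver.
ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)

a = np.random.rand(1_000_000).astype(np.float32)
b = np.random.rand(1_000_000).astype(np.float32)

mf = cl.mem_flags
a_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=a)
b_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=b)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, a.nbytes)

program = cl.Program(ctx, """
__kernel void add(__global const float *a,
                  __global const float *b,
                  __global float *out) {
    int i = get_global_id(0);
    out[i] = a[i] + b[i];
}
""").build()

program.add(queue, a.shape, None, a_buf, b_buf, out_buf)

out = np.empty_like(a)
cl.enqueue_copy(queue, out, out_buf)   # copy the result back to the host
print(np.allclose(out, a + b))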
