#SYCL

2025-05-06

@jannem llama.cpp has support for Vulkan and OpenCL. It also supports SYCL, if that's interesting to you. #OpenCL #Vulkan #SYCL

2025-05-04

Dynamic Memory Management on GPUs with SYCL

#SYCL #CUDA #HIP #Performance #Memory #Package

hgpu.org/?p=29881

2025-04-24

It seems that with #SYCL there is finally a common GPU programming interface capable of getting good performance from every vendor.

2025-04-10

What an honor to start the #IWOCL conference with my keynote talk! Nowhere else do you get to talk to so many #OpenCL and #SYCL experts in one room! I shared some updates on my #FluidX3D #CFD solver: how I optimized it at the smallest level of a single grid cell, and how I scaled it up on the largest #Intel #Xeon6 #HPC systems, which provide more memory capacity than any #GPU server. 🖖😃

Me doing the IWOCL 2025 keynote talk
2025-04-08

Just arrived in wonderful Heidelberg, looking forward to presenting the keynote talk at #IWOCL tomorrow!! See you there! 🖖😁
iwocl.org/ #OpenCL #SYCL #FluidX3D #GPU #HPC

me in Heidelberg with the Neckar river in the background
2025-04-04

My brain is absolutely fried.
Today is the last day of coursework submissions for this semester. What a hectic month.
DNN with PyTorch, brain model parallelisation with MPI, SYCL and OpenMP offloading of percolation models, hand-optimizing serial codes for performance.
Two submissions due today. Submitted one and finalising my report for the second one.
Definitely having a pint after this

#sycl #hpc #msc #epcc #cuda #pytorch #mpi #openmp #hectic #programming #parallelprogramming #latex

2025-04-02

Started SYCL this semester in my MSc, and I have a coursework on it.
I have never been more frustrated in my life.
I am not saying SYCL is bad. I might just be too dumb to master it in one semester in order to port an existing CPU code to use MPI & SYCL together.
CUDA was much easier for me for the same task.

#sycl #hpc #parallelprogramming #gpu #nvidia #cuda #msc #scientificcomputing #amd #mpi #epcc

pafurijaz
2025-03-31

It seems that #SYCL could be the real alternative for use on GPUs or CPUs of any brand, without necessarily having to rely on vendor-specific toolkits. I thought #Vulkan was the alternative. This might finally free us from the monopoly.

Vulkan logo

Managed to get an #Intel Arc A750 #gpu running on #risc_v using #OpenCL, #SYCL, and #AdaptiveCpp. Software PRs submitted for review.

#hpc #supercomputing

@risc_v

2025-03-23

The Shamrock code: I- Smoothed Particle Hydrodynamics on GPUs

#SYCL #ROCm #CUDA #PTX #OpenMP #MPI #Astrophysics #Physics #Package

hgpu.org/?p=29827

2025-03-23

ML-Triton, A Multi-Level Compilation and Language Extension to Triton GPU Programming

#SYCL #CUDA #oneAPI #AI #Triton #Compilers #Intel

hgpu.org/?p=29825

2025-03-23

Concurrent Scheduling of High-Level Parallel Programs on Multi-GPU Systems

#SYCL #TaskScheduling #PerformancePortability #HPC #Package

hgpu.org/?p=29823

2025-03-10

Even now, Thrust as a dependency is one of the main reasons why we have a #CUDA backend, a #HIP / #ROCm backend and a pure #CPU backend in #GPUSPH, but not a #SYCL or #OneAPI backend (which would allow us to extend hardware support to #Intel GPUs). <doi.org/10.1002/cpe.8313>

This is also one of the reasons why we implemented our own #BLAS routines when we introduced the semi-implicit integrator. A side-effect of this choice is that it allowed us to develop the improved #BiCGSTAB that I've had the opportunity to mention before <doi.org/10.1016/j.jcp.2022.111>. Sometimes I do wonder if it would be appropriate to “excorporate” it into its own library for general use, since it's something that would benefit others. OTOH, this one was developed specifically for GPUSPH and it's tightly integrated with the rest of it (including its support for multi-GPU), and refactoring to turn it into a library like cuBLAS is

a. too much effort
b. probably not worth it.

Again, following @eniko's original thread, it's really not that hard to roll your own, and probably less time consuming than trying to wrangle your way through an API that may or may not fit your needs.

6/
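For a sense of scale, the unpreconditioned BiCGSTAB recurrence really does fit in a few dozen lines of serial C++. The sketch below is purely illustrative (dense matrix, no preconditioner) and is not GPUSPH's implementation, which adds preconditioning, GPU kernels, and multi-GPU support:

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;  // dense, row-major; illustration only

// y = A * x
static Vec matvec(const Mat& A, const Vec& x) {
    Vec y(A.size(), 0.0);
    for (std::size_t i = 0; i < A.size(); ++i)
        for (std::size_t j = 0; j < x.size(); ++j)
            y[i] += A[i][j] * x[j];
    return y;
}

static double dot(const Vec& a, const Vec& b) {
    double s = 0.0;
    for (std::size_t i = 0; i < a.size(); ++i) s += a[i] * b[i];
    return s;
}

// Unpreconditioned BiCGSTAB (van der Vorst 1992): solve A x = b, x0 = 0.
Vec bicgstab(const Mat& A, const Vec& b, int max_iter = 1000, double tol = 1e-10) {
    const std::size_t n = b.size();
    Vec x(n, 0.0), r = b, r0 = b, p(n, 0.0), v(n, 0.0);
    double rho = 1.0, alpha = 1.0, omega = 1.0;
    for (int it = 0; it < max_iter; ++it) {
        const double rho_new = dot(r0, r);
        const double beta = (rho_new / rho) * (alpha / omega);
        rho = rho_new;
        for (std::size_t i = 0; i < n; ++i)
            p[i] = r[i] + beta * (p[i] - omega * v[i]);
        v = matvec(A, p);
        alpha = rho / dot(r0, v);
        Vec s(n);
        for (std::size_t i = 0; i < n; ++i) s[i] = r[i] - alpha * v[i];
        if (std::sqrt(dot(s, s)) < tol) {  // early convergence: avoid 0/0 in omega
            for (std::size_t i = 0; i < n; ++i) x[i] += alpha * p[i];
            break;
        }
        const Vec t = matvec(A, s);
        omega = dot(t, s) / dot(t, t);
        for (std::size_t i = 0; i < n; ++i) {
            x[i] += alpha * p[i] + omega * s[i];
            r[i] = s[i] - omega * t[i];
        }
        if (std::sqrt(dot(r, r)) < tol) break;
    }
    return x;
}
```

On the 2x2 system {{4,1},{1,3}} with b = {1,2} this converges to (1/11, 7/11) in two iterations; the GPU version of the same recurrence mostly replaces the loops with fused kernels and device-wide reductions.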

2025-02-18

I'm getting the material ready for my upcoming #GPGPU course that starts in March. Even though I most probably won't get to it, I also checked my trivial #SYCL programs. Apparently the 2025.0 version of the #Intel #OneAPI #DPCPP runtime doesn't like any #OpenCL platform except Intel's own (I have two other platforms that support #SPIRV, so why aren't they showing up? From the documentation I can find online this should be sufficient, but apparently it's not …)

2025-02-03

CPU-GPU co-execution through the exploitation of hybrid technologies via SYCL

#SYCL #OpenCL #CUDA #LLVM #PerformancePortability #LoadBalancing #HybridComputing

hgpu.org/?p=29717

2025-01-27

Exploring data flow design and vectorization with oneAPI for streaming applications on CPU+GPU

#SYCL #oneAPI #Package

hgpu.org/?p=29705

2025-01-16

HiPEAC 2025 kicks off next week and we're excited to be featured in two great sessions, on safety-critical systems and on developing highly parallel applications. If you're attending HiPEAC in Barcelona, be sure to come and see us!

More information on these sessions: khronos.org/events/hipeac-2025
#SYCL #opencl #vulkan

2025-01-14

We're used to leaning on children's books in Computer Science - with Gulliver's big-endian vs little-endian. Back at Supercomputing #SC24, I spoke at the #Intel booth all about open standards, performance portability, and the journey up the Yellow Brick Road to see the Wizard of Oz. Check out the video of the talk on YouTube:
youtu.be/xO8FGAOScpo?si=_BnVil
#performanceportability #OpenMP #SYCL

2024-12-15

Analyzing the Performance Portability of SYCL across CPUs, GPUs, and Hybrid Systems with Protein Database Search

#SYCL #oneAPI #Bioinformatics #Databases #HPC #PerformancePortability #Package

hgpu.org/?p=29596

2024-11-24

Performance portability via C++ PSTL, SYCL, OpenMP, and HIP: the Gaia AVU-GSR case study

#HIP #SYCL #OpenMP #CUDA #PerformancePortability #HPC #Astrophysics #Package

hgpu.org/?p=29555
