#SLURM

2025-07-01

Finishing some runtime system work; decided to try a #deskpi #super6c cluster board.

An ITX form-factor Beowulf cluster is amazing.

#slurm & #nfs worked out of the box from apt.

#ucx, #openmpi, #openshmem, #openpmix, #gasnet, & #hpx needed custom compilation.

@raspberrypi #arm #hpc #supercomputing
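
For anyone retracing those builds: a minimal sketch of a UCX + Open MPI source build, with illustrative versions, URLs, and install prefixes rather than the exact ones used on this cluster.

```
# Illustrative source build of UCX, then Open MPI against it.
# Versions and prefixes are examples, not the exact ones used.
wget https://github.com/openucx/ucx/releases/download/v1.17.0/ucx-1.17.0.tar.gz
tar xzf ucx-1.17.0.tar.gz && cd ucx-1.17.0
./configure --prefix=/opt/ucx
make -j4 && sudo make install
cd ..

wget https://download.open-mpi.org/release/open-mpi/v5.0/openmpi-5.0.5.tar.gz
tar xzf openmpi-5.0.5.tar.gz && cd openmpi-5.0.5
# Build against the custom UCX and with Slurm support so srun can launch jobs
./configure --prefix=/opt/openmpi --with-ucx=/opt/ucx --with-slurm
make -j4 && sudo make install
```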

2025-06-23

Can we discuss the quantum superposition of #slurm node states `power_up` and `power_on`? The correct state name at a given point in time is whichever one I don't try first.

2025-06-18

Got #slurm up and running on my #kvm environment to test #slurm-web. Hopefully I can get everything looking pretty to present as a test run for work after the honeymoon!

#linux #hpc

Slurm version 24.11 has been released, with support for Slurm accounting to store the execution history of AWS PCS jobs
dev.classmethod.jp/articles/aw

#dev_classmethod #HPC #AWS_Parallel_Computing_Service #AWS #Slurm

#AWS Parallel Computing Service (PCS) now supports accounting with #Slurm version 24.11 aws.amazon.com/about-aws/wh... #HPC via @awscloud.bsky.social
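
For context, accounting means job history becomes queryable with `sacct`. A generic example query (the user name and dates are placeholders; the options are standard sacct flags):

```
# Query finished jobs from the accounting database for one user.
sacct --user alice \
      --starttime 2025-06-01 --endtime 2025-06-30 \
      --format=JobID,JobName,Partition,AllocCPUS,Elapsed,State,ExitCode
```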


davecykl
2025-05-13

Another day, another brunch, another “legend edition”. Today it’s the unicorn, the national animal of Scotland, and a new flavour (some kind of berry flavour, possibly raspberry). Hopefully no unicorns were harmed or exploited in its production?

A can of Irn-Bru in its distinctive orange and blue colours, with a blue unicorn with an orange mane
davecykl
2025-04-30

At the risk of doing unpaid marketing for them, interesting to see that Irn-Bru (a fizzy drink from Scotland, one of very few countries in the world where a local drink outsells Big Cola) currently has a Nessie flavour (vaguely fruity, less iron-y/ferric than normal). Hopefully no loch monsters were harmed or used in its production?

A can of Irn-Bru in its distinctive orange and blue colours, with a bright green dinosaur-like cartoon image of Nessie, the Loch Ness monster
Diego "dciangot" Ciangottiniimdciangot@hachyderm.io
2025-03-26

Get ready for the interSpace! We are coming to make the #k8s extension to #SLURM, on-demand GPU VMs, and CaaS a #cncf community effort!

l.infn.it/interlink

#hpc #hybridcloud #gpu #ai #ml #interLink

statquant
2025-03-21

People, does anyone know of an example of sending jobs to #slurm using the package?

Christian Meesters (rupdecat@fediscience.org)
2025-03-12

Due to maintenance of the cluster I usually work on, only minor updates could be made to the #SLURM plugin for #HPC-compatible workflows.

However, it now has better GPU support via generic resources (GRES). It's already on #PyPi, soon on #Bioconda, too.
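
For context, a generic sketch of what a GPU request through Slurm generic resources (GRES) looks like on the submission side; the partition name and GPU type are placeholders for whatever a given cluster defines:

```
#!/bin/bash
# Generic GPU job via Slurm GRES; partition and GPU type are placeholders.
#SBATCH --job-name=gpu-example
#SBATCH --partition=gpu
#SBATCH --gres=gpu:a100:1     # one GPU of type "a100"
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=01:00:00

srun nvidia-smi
```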

2025-03-10
```
# Fragment of a workflow-generated submission; the leading `sbatch`
# is inferred from context.
sbatch \
  --job-name call-fastqTask-24-XXX-0 \
  --cpus-per-task 1 \
  --mem 1024M \
  --time 3179733413 \
  /share/software/apptainer/1.2.5/bin/apptainer
```

I'm sure slurm will run it someday, I'm sure.

if I just keep waiting, it'll run the job that wants a century of walltime.

surely.

#postdoc pro tip: watch the #slurm submissions to be sure the workflow engine is submitting sane things!
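
One way to do that watching, as a sketch (`--me` needs a reasonably recent Slurm; the format codes are standard squeue fields):

```
# List your own jobs with ID, name, elapsed time, time limit,
# requested memory, and state; a limit like 3179733413 stands out.
squeue --me --format="%.12i %.30j %.10M %.12l %.8m %.8T"

# Or inspect a single job in detail (jobid is a placeholder):
scontrol show job <jobid> | grep -E 'TimeLimit|MinMemory'
```
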
2025-03-05

Looking to optimize your computing resources for your data science and AI projects?
Find out how Slurm, a powerful open source tool, can help!

Slurm is a highly scalable system for cluster management and job scheduling. Read on our blog how to get the most performance out of your clusters with it.
🔗 worteks.com/blog/comment-utili

@ow2 @OpenInfra @fsfe

Illustration: a network diagram with servers and various icons
Krzysztof Sakrejda (wronglang@bayes.club)
2025-02-26

#HPC cluster based on Red Hat #Linux running #Nix through #slurm... surely no one will regret that!

Pierre Lindenbaum (yokofakun@genomic.social)
2025-02-17

#slurm + #nextflow: my colleague cannot run nextflow. In the log I see nextflow.executor.SlurmExecutor - [SLURM] invalid status line: `squeue: error: Invalid user: ?`

NF reports the job as completed, but I can see it still running in squeue.

puzzled

Michael Sumner (mdsumner@rstats.me)
2025-02-11

#rstats future_map in #furrr on #slurm has stopped being my friend ... multicore or multisession, both take way longer than normal. Tested on small sets with 6 cores, smallish sets with 24, and the real job with 128 cores.

parallel::parLapply works fine on the small sets or with all 128 cores.

any ideas?

Christian Meesters (rupdecat@fediscience.org)
2025-02-05

Reading this call (mast.hpc.social/@sneuwirth/113) from @sneuwirth, I think I could tell a number of tales from my experience with developing the #Slurm plugin for #Snakemake (and other software over the years).

Particularly the issue reports describing the peculiarities fellow admins came up with (e.g. "we are not allowed to do this" and "we not that"; "we have this setting" and "we that"). Developing #HPC software is a challenge in its own right; supporting workflow software seems to be developing into a nice challenge of its own.

Could certainly deliver a funny few minutes. 😉 Perhaps I should collect anecdotes and write an article sometime? 🤔

2025-01-24

A little tutorial on speeding up bioinformatic work on a computing cluster by converting your loops to array jobs

#HPC #bioinformatics #slurm

plantarum.ca/2025/01/24/array-

[edit: fixed link]
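
The core move from the tutorial, sketched as a generic sbatch script: one array task per line of a file list. The tool, file names, and resource limits are placeholders.

```
#!/bin/bash
# Loop-to-array sketch: each task processes one line of samples.txt.
#SBATCH --job-name=qc-array
#SBATCH --array=1-100%10      # 100 tasks, at most 10 running at once
#SBATCH --cpus-per-task=1
#SBATCH --mem=2G
#SBATCH --time=00:30:00

# Pick the Nth input file using the array task ID
SAMPLE=$(sed -n "${SLURM_ARRAY_TASK_ID}p" samples.txt)
fastqc "$SAMPLE" -o qc/
```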

Christian Meesters (rupdecat@fediscience.org)
2025-01-15

Again, we have a new patch-level release of the #Snakemake executor for #SLURM on #HPC systems.

It turned out that some clusters do not allow account checking with `sacctmgr` (which was used for historical reasons); hence, we now have a fallback to `sshare`.

See github.com/snakemake/snakemake for details.

#OpenScience #ReproducibleResearch #ReproducibleComputing
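
Roughly the shape of such a fallback, as a sketch rather than the plugin's actual code: try `sacctmgr`, and fall back to `sshare` where the site blocks it.

```
# Account lookup with fallback; some sites block sacctmgr for
# regular users, so query sshare instead.
if ! sacctmgr -nP show user "$USER" format=DefaultAccount 2>/dev/null; then
    sshare -nP --user "$USER" --format=Account
fi
```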
