#DRBD

Child of darkness (mcchaos@metalhead.club)
2025-10-27

But why do I need a #proxmox test cluster?
Well, I've had #glusterfs installed for a few months, and it's quite easy and straightforward (and reliable so far).
Unfortunately, Proxmox will no longer support it, so I need an alternative solution.
#DRBD is my go-to candidate; #ceph is far too much for my #homelab.
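For a small Proxmox cluster like this, a DRBD setup starts from a resource file. Below is a minimal sketch; the resource name, backing LVM volume, hostnames, and addresses are all made-up placeholders, not values from the post:

```
# /etc/drbd.d/vmdata.res (hypothetical): adjust names, disks,
# and addresses to your own nodes.
resource vmdata {
    device    /dev/drbd0;
    disk      /dev/vg0/vmdata;    # backing LVM volume (assumed)
    meta-disk internal;           # keep DRBD metadata on the same disk

    on pve1 {
        address 192.168.1.11:7789;
    }
    on pve2 {
        address 192.168.1.12:7789;
    }
}
```

After `drbdadm create-md vmdata` and bringing the resource up on both nodes, one node is promoted to Primary and the replicated device can back VM storage.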

Rainer "friendica" Sokoll (rainer@friendica.sokoll.com)
2025-07-01
Suppose I have a host with half a dozen #Docker containers. I'd like the host (or really, the containers) to be highly available.
What simple solutions are there?
Kubernetes seems like overkill.
#Rancher? #K3s? #Portainer? Or the classic route with #Linux-HA and #DRBD? Or something else entirely?

@ij well, look at that. Haven't heard of "Open-E" in a long time. Used it at university (2011?) for #proxmox. We had quite a few problems with it, especially when the config of the internal #DRBD got corrupted. But it ran for a fairly long time, until my successor sensibly replaced it with #ceph.

2024-07-25

The only thing that bothers me is compiling the #DRBD9 kernel module, even though a lot is done to handle compatibility with the kernel.

linbit.com/blog/how-to-make-dr

But I've always been uneasy about installing a full build toolchain on my servers.

Well, it's manageable centrally: build a package, or distribute the binary with #saltstack, #ansible, or similar.
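One way to sidestep a full toolchain on every server, as suggested above, is to handle the module centrally. A hypothetical #ansible task list, assuming Debian-family hosts with the LINBIT `drbd-dkms` package available in a configured repository, might look like:

```yaml
# Hypothetical tasks: package names and repo availability are assumptions.
- name: Install DRBD 9 via DKMS so the module is rebuilt for each kernel
  ansible.builtin.apt:
    name:
      - drbd-dkms
      - drbd-utils
    state: present
    update_cache: true

- name: Ensure the drbd kernel module is loaded
  community.general.modprobe:
    name: drbd
    state: present
```

With DKMS, each host still compiles the module locally, but only needs kernel headers rather than a hand-managed toolchain; alternatively, a prebuilt package can be produced on one build host and pushed out the same way.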

#virtualisation #kvm #OpenNebula #proxmoxVE #proxmox #xen #cloudStack #linbit #linstor #DRBD #LVM #sysadmin

2024-07-25

On a small test infrastructure, I tried the distributed storage tool #linstor, which is based on #DRBD and integrates well with several virtualization systems.

linbit.com/blog/comparing-open

linbit.com/blog/open-source-vm

#virtualisation #kvm #OpenNebula #proxmoxVE #proxmox #xen #cloudStack #linbit #LVM #sysadmin

2024-04-09

And now the question is what to do: #glusterfs or #drbd?
For me it's about performance and safety. I would have three servers in different fire compartments. Hmm....

2023-07-10

I’m also curious if anyone has looked at using #DRBD with #Umbrel in general (not just for #lightning).
[ #umbrelOS #btc #bitcoin #SelfHosting ]

2023-07-10

Anyone considered using #DRBD as the foundation of a highly available #lightning node? To me, it seems like a fairly straightforward way to mitigate a node being a single point of failure.
[ #value4value #bitcoin #btc #lnd ]

2023-02-14

hackage.haskell.org/package/hn
a Haskell implementation of Nix; it includes #drbd via callPackage

2022-11-12

@ij with #xcp-ng/#xenserver I used ha-lizard (which uses #drbd) for that. I don't know if there is something similar for Proxmox. There, the quorum is just a pingable host.

Dominik Zajac (banym@bsd.network)
2022-02-05

Just remembered the time when 2.6.18 was the #Linux kernel we ran for a long time with #Xen and #DRBD. Back then we had both 32-bit and 64-bit in production, and it was rock solid for quite some time.

2022-01-06
For a new e-mail cluster that will eventually consist of a number of #IMAP servers, I need some shared storage that all servers can read from and write to at the same time. I have some experience with #DRBD, but I was told that DRBD isn't going to be the solution for what I want.

It will start small, with only several hundreds of mailboxes, but it should be able to scale up to many thousands or even hundreds of thousands in the distant future. What I want is a variable number of #Dovecot servers with an HA Director in front of them, so that I can upgrade and reboot individual nodes without users noticing.

#NFS would, on paper, be ideal, but it gets very slow when working with lots and lots of small files. #ZFS is pretty cool, but it wasn't designed as a cluster filesystem, and I'm not sure it can be made to do that reliably. I see #GlusterFS and #GFS2 mentioned in many articles, but I have no experience with those. I have a bit of experience with #Ceph, just enough to know that I don't want that.

What do you guys think: which system should I go for? And, of course, why? Did I overlook any systems that are worthwhile?
heise online, unofficial (heiseonline@squeet.me)
2021-03-01
heise offering: storage2day 2021: Open-Source Storage in the Data Center

A full day of training on open-source storage running on commodity hardware: that's how the online conference storage2day opens its 2021 program.
Open Minds Award (OpenMindsAward)
2018-11-08

is nominated in the Open Software category. Jury statement: @philipp_reisner@twitter.com (author) has extended the maintenance interface with requirements from and solutions. More: openminds.at/nominees-open-sof @linbit@twitter.com @linuxwochen@twitter.com
