#resilver

Joerg Jaspert :debian: (Ganneff@fulda.social)
2025-05-06

Whew. New #hdd installed, #resilver on #zfs started. Now I'm curious to see at what point the remaining-time estimate becomes halfway stable.

It started out at over 3 days remaining. Now, at just under 1.2% done (one POINT two, not 12!), we're down to 18 hours remaining.

(At 2.21% it's down to 11:52 hours)

According to iotop, Total Disk Read is between 900 and 1400 M/s - whee, there's never this much going on otherwise. 😁
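Those swinging estimates follow from simple arithmetic: a naive linear extrapolation from the fraction done is extremely sensitive early on, because the first percent rarely runs at a representative rate. A minimal sketch of that extrapolation (my own toy formula, not ZFS's actual estimator, which works from the recent issue rate):

```python
def naive_eta_hours(elapsed_hours: float, fraction_done: float) -> float:
    """Remaining time if the average rate observed so far were to hold."""
    return elapsed_hours * (1 - fraction_done) / fraction_done

# Early on, a one-percentage-point change in progress moves the ETA by days:
print(naive_eta_hours(1.0, 0.012))  # ~82 hours left at 1.2% done after 1 hour
print(naive_eta_hours(1.0, 0.022))  # ~44 hours left at 2.2% done after 1 hour
```

As the measured fraction grows, the same formula becomes progressively less sensitive, which is why the estimate eventually settles.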

2024-12-30

After almost 8 years of use (64613 power-on hours), this bad boy finally gave up the ghost. #homelab #bad_hdd #westerndigital Manufactured 26/09/2016, so I feel it delivered value. No data lost (#zfs), and a #resilver is in progress.

A 3.5" Western Digital Blue hard disk drive, sitting on a desk

Lesson learned on growing a RAID array: it's probably much faster to just move all data off your array, partition and format all the disks, make a new array, and then add all the data back, rather than trying to grow/resize in place. But I didn't have a removable drive big enough, or enough space on other machines, so - doing new things allows one to gain appreciation. #resilver
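For a ZFS pool, that "move everything off and rebuild" approach can be sketched roughly as below; the pool names (tank, scratch) and disk names are hypothetical, and this assumes scratch space big enough for a full copy - exactly what wasn't available here:

```sh
# Hypothetical sketch: rebuild instead of grow (destroys the old pool!)
zfs snapshot -r tank@migrate
zfs send -R tank@migrate | zfs receive -F scratch/tank   # copy everything off
zpool destroy tank                                       # point of no return
zpool create tank raidz1 da1 da2 da3 da4                 # new, bigger layout
zfs send -R scratch/tank@migrate | zfs receive -F tank   # copy everything back
```

The send/receive round trip also rewrites all data sequentially, which is part of why a fresh copy tends to beat an in-place reshape.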

2024-08-13

The joys of summer #homelab #zfs #resilver

  pool: storage
 state: DEGRADED
status: One or more devices is currently being resilvered. The pool will
        continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
  scan: resilver in progress since Tue Aug 13 09:43:59 2024
        1.69T scanned at 4.34G/s, 1.41T issued at 3.64G/s, 36.0T total
        0B resilvered, 3.93% done, 02:42:11 to go

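The "to go" figure in that output can be checked by hand: data still to issue divided by the issue rate. A quick sanity check (plain arithmetic, assuming T and G here mean TiB and GiB):

```python
def remaining_seconds(total_t: float, issued_t: float, rate_g_per_s: float) -> float:
    """Seconds left = data still to issue / issue rate (1 T = 1024 G)."""
    return (total_t - issued_t) * 1024 / rate_g_per_s

secs = remaining_seconds(36.0, 1.41, 3.64)
h, rem = divmod(int(secs), 3600)
m, s = divmod(rem, 60)
print(f"{h:02d}:{m:02d}:{s:02d}")  # prints 02:42:10 - within a second of the shown 02:42:11
```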
2024-05-29

New "ZFS Resilver on SMR Drives" article on the vermaden.wordpress.com blog.

vermaden.wordpress.com/2024/05

#data #freebsd #linux #nas #openzfs #resilver #server #storage #truenas #zfs

2024-05-08

@zirias

@parvXtl

One of the ways I help avoid failures with raidz1 is by replacing HDDs when they get to around 5 years old, or when they start showing errors. Also, monthly scrubs help ensure that every drive is read in full regularly, which keeps things healthy and accelerates the detection of failures.

#resilver #FreeBSD #RAID #ZFS #raidz1 #backup
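On FreeBSD, those monthly scrubs don't need a hand-written cron job: periodic(8) ships a scrub script. A sketch for /etc/periodic.conf (the threshold value here is just an example):

```sh
# /etc/periodic.conf - let the daily periodic run handle ZFS scrubs
daily_scrub_zfs_enable="YES"            # enable the 800.scrub-zfs script
daily_scrub_zfs_default_threshold="30"  # days between scrubs for each pool
```

The script checks each pool's last scrub date on every daily run and only starts a new scrub once the threshold has elapsed.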

Felix Palmen :freebsd: :c64: (zirias@bsd.cafe)
2024-05-07

@david_chisnall My pool isn't for a NAS but basically for "everything" I need, which includes serving files, but lots of other things (routing, firewall, internal MTA with mail storage, building base and ports, a windows server 2020 for work, and so on ...). Of course I avoid IO-heavy stuff during resilver, but it's still a busy machine, so I guess a "balanced" default tuning is actually what I want πŸ˜‰

Thanks for explaining how ETA of #ZFS #resilver ends up so "random" πŸ‘

Felix Palmen :freebsd: :c64: (zirias@bsd.cafe)
2024-05-07

For whatever reason, the #resilver on my #FreeBSD server slowed down continuously: the initial ETA showed around 4 hours, and by the time I left my desktop, it already showed around 10 hours. But hey, it finished without errors, so, all fine πŸ˜…

BTW, I really don't get why recently, you read a lot of stuff about how #RAID-5 (or the #ZFS equivalent #raidz1) was super dangerous and you should never use it 🫨

What's certainly true is: With larger pools and larger individual disks, the risk of a second disk failure during resilver significantly increases. But then, there's no "risk-free" storage, so a #backup is always a *must*.

What's also true is that raid-5/raidz1 is still the redundancy scheme with the least storage overhead for most scenarios (3 or more disks). And of course it still reduces the risk of a failed pool. This pool here has only one of its original disks left, and I haven't needed my backup so far. 🀷

So please move the discussion of RAID back onto a sensible footing. The scheme/level you choose is always a trade-off between the cost (storage overhead) and the amount of risk reduction, and that's pretty much all there is to it.

I love my zfs pool of hot-pluggable SAS disks. A disk failed? No problem. Detach it, physically replace it, and wait for the "resilver" magic. All while a Time Machine and an "rsnapshot" backup are actively going on. And the disk array could even tolerate a second disk failure during reconstruction.
(resilvering starts slow, but picks up speed ...)
#selfhosting #SAS #scsi #zfs #raidz2 #NetBSD #backup #resilver

Screen photo of a server console showing the status of a zfs pool during "hot" reconstruction.
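The hot-swap workflow described above boils down to a handful of zpool commands; the pool and device names here (tank, da3) are hypothetical:

```sh
zpool status tank          # identify the FAULTED/UNAVAIL disk
zpool offline tank da3     # take it out of service if it's still attached
# ...physically swap the disk in its hot-plug bay, then:
zpool replace tank da3     # new disk in the same slot; resilver starts
zpool status tank          # watch the resilver progress pick up speed
```

With raidz2, the pool stays redundant against one further disk failure while the resilver runs, which is exactly the second-failure tolerance mentioned above.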
SMKCC (smkcc@social.tchncs.de)
2022-05-24

Phew, after my server's disk failure over the weekend, the resilver process completed successfully after 12 hours of runtime. Yippee. #hobbyadmin #truenas #festplattenausfall #resilver
