#snapraid

Anyone familiar with #SnapRAID/#OpenMediaVault able to tell me why the SnapRAID sync that runs on a schedule after the SnapRAID diff sometimes syncs and sometimes doesn't?

Like in this case, it doesn't sync and seems to only scan 1 of my 2 data drives (data2):

There are differences!
SnapRAID DIFF finished - Wed Jun 11 04:31:38 +08 2025
----------------------------------------
Changes detected [A-131,D-2,M-0,C-0,U-142] -> there are updated files (142) but update threshold (0) is disabled.
Changes detected [A-131,D-2,M-0,C-0,U-142] -> there are deleted files (2) but delete threshold (0) is disabled.

SnapRAID SYNC Job started - Wed Jun 11 04:31:38 +08 2025
----------------------------------------
Self test...
Loading state from /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Scanning...
Scanned data2 in 50 seconds
SnapRAID SYNC Job finished - Wed Jun 11 04:33:10 +08 2025
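
For my own notes, here's a rough sketch of what I understand the scheduled diff-then-sync job to be doing - not the actual OMV plugin script, and the threshold variables are illustrative:

#!/bin/sh
# Sketch of a threshold-gated diff-then-sync job (a threshold of 0
# means "disabled", matching the wording in the log above).
LOG=/tmp/snapraid-diff.log
DEL_THRESH=0
UPD_THRESH=0

snapraid diff > "$LOG" 2>&1
rc=$?   # per the manual: 'diff' exits 2 when a sync is needed, 0 when not
if [ "$rc" -eq 0 ]; then echo "No changes, skipping sync"; exit 0; fi
if [ "$rc" -ne 2 ]; then echo "diff failed (exit $rc)"; exit 1; fi

# Pull the counts from the diff summary table (approximate parsing)
removed=$(awk '$2 == "removed" { print $1 }' "$LOG")
updated=$(awk '$2 == "updated" { print $1 }' "$LOG")
: "${removed:=0}" "${updated:=0}"

if [ "$DEL_THRESH" -gt 0 ] && [ "$removed" -gt "$DEL_THRESH" ]; then
    echo "Deletions ($removed) over threshold, not syncing"; exit 1
fi
if [ "$UPD_THRESH" -gt 0 ] && [ "$updated" -gt "$UPD_THRESH" ]; then
    echo "Updates ($updated) over threshold, not syncing"; exit 1
fi

snapraid sync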

but then the next day, it does sync and seems to scan both my data drives (data1 and data2):

There are differences!
SnapRAID DIFF finished - Thu Jun 12 04:31:45 +08 2025
----------------------------------------
Changes detected [A-189,D-3,M-0,C-0,U-143] -> there are updated files (143) but update threshold (0) is disabled.
Changes detected [A-189,D-3,M-0,C-0,U-143] -> there are deleted files (3) but delete threshold (0) is disabled.

SnapRAID SYNC Job started - Thu Jun 12 04:31:45 +08 2025
----------------------------------------
Self test...
Loading state from /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Scanning...
Scanned data2 in 64 seconds
Scanned data1 in 90 seconds
Using 863 MiB of memory for the file-system.
Initializing...
Hashing...
# doing hashing stuffs
Everything OK
Resizing...
Saving state to /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Saving state to /srv/dev-disk-by-uuid-<data2>/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-uuid-<data2>/snapraid.content in 5 seconds
Verified /srv/dev-disk-by-uuid-<data1>/snapraid.content in 6 seconds
Using 48 MiB of memory for 64 cached blocks.
Selecting...
Syncing...
# doing syncing stuffs

  data1 39% | ************************
  data2 27% | ****************
 parity 30% | ******************
   raid  1% | 
   hash  1% | 
  sched  0% | 
   misc  0% | 
            |______________________________________________________________
                           wait time (total, less is better)

Everything OK
Saving state to /srv/dev-disk-by-uuid-<data1>/snapraid.content...
Saving state to /srv/dev-disk-by-uuid-<data2>/snapraid.content...
Verifying...
Verified /srv/dev-disk-by-uuid-<data1>/snapraid.content in 3 seconds
Verified /srv/dev-disk-by-uuid-<data2>/snapraid.content in 3 seconds
SnapRAID SYNC Job finished - Thu Jun 12 04:36:09 +08 2025

Every day, I'm not quite confident whether my SnapRAID array is ready for a failure, cos I assume that on days when it didn't sync, the parity drive has no idea about (or copy of) the latest set of files, no?
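
(In the meantime, this is how I'm checking whether parity is actually current - both are stock commands, and the manual says 'diff' exits with code 2 while a sync is still pending:)

snapraid status   # summarises the array, incl. anything not yet synced
snapraid diff > /dev/null 2>&1; [ $? -eq 2 ] && echo "parity NOT current"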

I think the one big 'flaw' or downside to #SnapRAID is that it's not really #RAID, is it, if you're sorta meant to exclude certain directories from the parity storage, like #Docker stuffs, etc. (mostly, 'moving parts').

When it comes to 'actual' RAID, you can easily replace disk(s) with no issue - the replacement disk would be 1:1, exactly the way the old disk was. With SnapRAID, that might not be the case.

I think I might RMA that one SATA SSD - it's currently set up as 1 of 2 #SnapRAID data drives (there's another identical parity drive), which is then part of a #MergerFS pool.

There are a ton of video guides on how to replace a disk that's part of a #RAID array for failure/upgrade reasons on #TrueNAS, but I haven't found one for this SnapRAID-MergerFS setup on #OpenMediaVault.

From what I can tell, there doesn't seem to be a graphical option on #OMV to do this easily/intuitively. Might need to look up some written guides/docs to do this safely, so I can send the SSD back and recover my data onto the replacement drive once it arrives.

---

Hopefully, this written guide should suffice:

🔗 https://wiki.omv-extras.org/doku.php?id=omv7:omv7_plugins:snapraid#recovery_operations
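
If I'm reading it right, the recovery boils down to roughly this on the CLI - a sketch only, with 'data2' standing in for whichever disk name snapraid.conf uses; defer to the wiki for the authoritative steps:

# After mounting the replacement disk at the same path and making sure
# the matching 'data data2 ...' line in snapraid.conf points at it:
snapraid -d data2 -l /var/log/snapraid-fix.log fix   # rebuild from parity
snapraid -d data2 -a check                           # verify file hashes only
snapraid sync                                        # bring parity current again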

RE: https://sakurajima.social/notes/a8mrrt78bi

I've managed to get #OpenMediaVault working on my #RaspberryPi (running #Raspbian Lite) and the performance seems pretty impressive, despite relying on USB storage for the SSDs!

This is my first time running a #NAS on the Pi, on #OMV, using not #ZFS or #RAID but rather an #Unraid-like solution, 'cept #FOSS, called #SnapRAID in combination with #mergerfs (the drives themselves are simply #EXT4).
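
(Done by hand, the pool itself would be roughly one /etc/fstab line - a sketch with placeholder mount points, and note the parity disk deliberately stays out of the pool:)

/srv/dev-disk-by-uuid-DATA1:/srv/dev-disk-by-uuid-DATA2 /srv/pool fuse.mergerfs cache.files=off,category.create=mfs,dropcacheonclose=true,fsname=pool 0 0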

So far, honestly, so good. I got 2x 1TB SSDs for data, and another 1TB SSD for parity. I don't have a backup for the data themselves atm, but I do have a scheduled backup solution (#RaspiBackup) set up for the OS itself (SD card). It's also got #Timeshift for creating daily snapshots.

I'm not out of the woods yet though, cos after this comes the (somewhat) scary part - deploying #Immich on the Pi lol (using OMV's #Docker compose interface, perhaps). I could just deploy it in my #Proxmox #homelab and not have to worry about system resources or hardware transcoding, etc., but I really wanna experiment with this 'everything hosted/contained in 1 Pi' concept.

What's a good #SnapRaid exclusion rule list (for #OpenMediaVault/#OMV, if that matters)?
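
For context, this is the starting point I've cobbled together from the stock snapraid.conf example plus community lists - the docker/appdata lines are my guess at where the 'moving parts' live, so adjust to your layout:

# snapraid.conf excerpt (in the OMV plugin these go under the Rules tab)
exclude *.unrecoverable
exclude /tmp/
exclude /lost+found/
exclude .Trash-*/
exclude .AppleDouble/
exclude ._AppleDouble/
exclude .DS_Store
exclude Thumbs.db
exclude aquota.user
exclude aquota.group
exclude docker/
exclude appdata/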

]|-' ([ ]) ]|\|[ '][' (@seth_arimainyu@ieji.de)
2025-05-21

@krutonium Ok, sorry for the eagerness. I'm still so amazed by #snapraid fix that I've recommended it at every opportunity :D

]|-' ([ ]) ]|\|[ '][' (@seth_arimainyu@ieji.de)
2025-05-21

@krutonium No clue - I know it's an annoying comment, sorry.

Maybe, for future reference, you could think about #snapraid. I strongly recommend it. It's an amazing tool; I use it to back up my media library (about 30 TB). Runs on #linux and #windows

snapraid.it/

Last month I lost a whole 8TB disk and snapraid restored everything in a breeze. And, unlike #Raid solutions, you can use the rest of your array/data/media while recovering, simply #genius

Installing #OpenMediaVault on the #RaspberryPi has been kind of a headache so far, despite how 'simple' it always seemed in other people's videos... Kinda wondering if I'm better off just setting up #SnapRAID, #mergerfs, and #Samba/#CIFS manually.

I don't think I need #OMV specifically to use the Pi as a simple #SMB share that I also happen to run some #Docker services on; I just thought it'd be neat to finally try out OMV, after having only used and been familiar with #TrueNAS all this while.
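
(If I do go manual, the Samba side is just one stanza in /etc/samba/smb.conf - share name, path, and user below are placeholders:)

[pool]
    path = /srv/pool
    browseable = yes
    read only = no
    valid users = pi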

--

Well, while it was busy installing (which I initiated remotely, through SSH), for some fuckin' reason I got disconnected from it and couldn't reconnect through SSH again. Why does this only happen when I'm a million miles away (exaggerating) with no physical access to it? When I'm literally in the same building, doing the same type of thing thousands of times, everything just... goes swell, to the point that it deluded me into thinking #KVM's a waste of money :')

2025-04-23

Part 1 of a massive new video series I've been working really hard on behind the scenes for the last few weeks.

100% open source. 100% free.

Join me on the journey to build the Perfect Media Server.

youtu.be/Yt67zz9p0FU

#selfhosting #mediaserver #homeinfra #mergerfs #snapraid #docker

Mona :frogsleepy: 🌺 (@Sirs0ri@corteximplant.com)
2025-04-07

This is a longshot, but do I know anyone who's familiar with #SnapRAID? I'm running regular sync/scrub jobs on my NAS through cronjobs, and I want to monitor them for errors - docs about the tool's possible exit codes would be incredibly useful, but I can't find anything.

Can I rely on it returning anything other than "0" in case anything out of the ordinary is detected, or should I look into parsing the output of the respective commands?
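
The naive cron wrapper I have in mind, if exit codes can be trusted - assuming 0 means success (the manual also gives 'diff' a special exit code of 2 when a sync is needed); the mail address is a placeholder:

if ! snapraid sync > /var/log/snapraid-sync.log 2>&1; then
    mail -s "snapraid sync failed on $(hostname)" admin@example.com \
        < /var/log/snapraid-sync.log
fi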

2025-03-11

Had a disk go read-only on me yesterday. I started getting SMART errors back in September but never got around to getting a replacement.
I felt confident though, since I run #snapraid - I would just restore the failed disk from the parity once it failed. However, once I replaced the disk and ran `snapraid -e fix`, I only got back like 20% of the files. Seems I hadn't had a successful sync of that disk for a good while. Lesson learned.

Christian Pietsch (@chpietsch@fedifreu.de)
2024-12-22

@phillo I did not know about #SnapRAID. The description reads great for #selfhosting people. Do let us know if it helped you out of this mess!

Philip Steller (@phillo@fedifreu.de)
2024-12-22

Accidentally deleted 43,757 files while using rsync with --delete in the wrong direction. That went down the drain quickly. Beware ⚠️

Plan A: recovering from (Pseudo-)RAID (Snapraid ftw). Now running.
Plan B: getting a backup from another location by bike.

Always make backups (and check them regularly)...
#fail #delete #snapraid #backup
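
(Footnote for anyone in the same mess: SnapRAID has a targeted mode for exactly this case, restoring only missing/deleted files and touching nothing else - a sketch, assuming default paths:)

snapraid -m -l /var/log/snapraid-fix.log fix   # -m / --filter-missing: restore deleted files only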

My GoHardDrive RMA Story:

- A #Plex HDD failed Sun night
- Emailed GoHardDrive for warranty RMA Mon AM
- Immediately stopped all writing to SnapRAID Array to preserve parity
- Got a prepaid shipping label Mon PM
- Shipped bad HDD from MI to CA Tues AM
- Got new HDD the following Tues
- Ran disk-spinner to check sectors
- Restored data with #SnapRAID fix (everything was recovered!)
- Running extended SMART test
- Just need to point mergerfs from my backup HDD to the new RMA when done

#homelab

Welp. Looks like it's time to test GoHardDrive's warranty process. This drive's toast (or close to it).

Currently using TestDisk to transfer as much as I can to another drive. (I also have the SnapRAID parity to rebuild from)

#homelab #SnapRAID #OpenMediaVault

[Image: SMART HDD report showing 3 reallocated sectors and 112 pending sectors.]
[Image: TestDisk status: 923 files copied from failing disk, 0 failed.]

Filed a #Sonarr Github Issue yesterday, and the devs already have a fix in the pipeline for the constantly-updating timestamps. Nice!

github.com/Sonarr/Sonarr/commi

I had disabled all timestamp updates in the -arrs, but it looks like it can be turned back on soon (in Sonarr, at least), without making #SnapRAID go nuts.

I guess I should check this behavior in the other -arrs and see if they need Issues filed, too...

#homelab

Apparently #SnapRAID syncs are supposed to be fast.

#Radarr & #Sonarr update my media timestamps with the original airdate on every rescan.

In doing so, a timestamp with exactly zero nanoseconds is used. SnapRAID hates this, and nudges you to use `snapraid touch` to apply a random nonzero nanosecond value to the timestamp.

Then Sonarr/Radarr comes back and resets the nanoseconds to 0.

Turns out my 2-hour SnapRAID syncs finish in just a few seconds if the timestamps don't keep changing. #homelab
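
For anyone hitting the same thing, both halves of the workaround are stock SnapRAID commands (the grep pattern is my guess at the exact status wording):

snapraid status | grep -i 'sub-second'   # reports files with zero sub-second timestamps
snapraid touch                           # assigns them a random non-zero value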

Bought a MediaSonic PROBOX 4-bay enclosure and one of those 12TB refurb enterprise HGST Ultrastar HDDs from GoHardDrive.

Starting slow. But I think I'll buy a couple more of those HDDs soon and spin up some #SnapRAID for my developing #Plex hoarding problem.

And to think, I only had 1.5TB on my NAS less than 2 years ago.

#homelab

Chris :tux: (@ATLChris@mas.to)
2023-07-08

After looking more into #TrueNAS Scale, I decided to just stick with #Snapraid + #MergerFS for my Plex Media. I like having the ability to mix and match drives and easily expand my storage.

I have never understood why RAID is so much more popular than SnapRaid.
