#zfs

Practical ZFS - Discussions (PracticalZFS@feedsin.space)
2025-07-05
Questions re: Optimizing SCRUB and SMART Test Schedules for HDD and NVME SSD Pools (Home/Small Office)

Hello,

I have two pools:

  1. HDD Pool: 4x Mirror VDEVs with 14TB enterprise HDDs; and
  2. SSD Pool: 1x Mirror VDEV with 2x 4 TB NVME.

I’m trying to optimize my SMART test and SCRUB schedules.
Currently, they look like this:

  1. SCRUB (per pool): Sunday at midnight, every 35 days.
  2. LONG (per pool): Once a week, Wednesday, at 7 PM.
  3. SHORT (per pool): Daily at 4 PM.

Some questions:

  1. Given the size of my disks, do I have the SCRUB set far enough apart from the LONG HDD tests to minimize the possibility of them running at the same time? I guessed at this scheduling, to be honest.
  2. I’ve seen suggestions that it’s sufficient for home/small office use to do the LONG HDD tests once a month instead of once a week, especially since ZFS adds another layer of health checks. Good idea/bad idea?
  3. I’ve also seen suggestions that running a LONG test on an NVME isn’t really beneficial enough to be worth it. I’m (vaguely) assuming that if they’re going to have problems, a SHORT test is sufficient to detect them combined with ZFS’ health checks. (Also, every NVME vendor seems to implement SMART their own special way, so maybe that has something to do with it.)

I’d really appreciate some advice so I could settle on a strategy and stop thinking about it. Thanks!
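
For reference, the staggering in question could be expressed as plain cron entries along these lines (a sketch only; pool names, device nodes, and binary paths are placeholders, and appliance UIs such as TrueNAS configure the same idea through their own scheduler):

# Scrub each pool on the 1st of the month at 00:00.
0 0 1 * *    /sbin/zpool scrub hddpool
0 0 1 * *    /sbin/zpool scrub nvmepool

# LONG SMART test on each HDD on the 15th at 19:00, two weeks away from
# the scrub window so the two never run at the same time.
0 19 15 * *  /usr/sbin/smartctl -t long /dev/sda

# SHORT SMART test daily at 16:00 (repeat per disk as needed).
0 16 * * *   /usr/sbin/smartctl -t short /dev/sda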

1 post - 1 participant

Read full topic

#zfs

vermaden (vermaden)
2025-07-05

Added UPDATE 2 - Interim Solution to the Failed Backup Server Build article.

vermaden.wordpress.com/2025/05

#verblog #freebsd #hardware #backup #NAS #rsync #linux #zfs #openzfs

Daniel Wayne Armstrong (dwarmstrong@fosstodon.org)
2025-07-04

New blog post!

ZFS snapshots are useful for tracking changes in my home directory, but these snapshots are stored locally on the device. They are not backups of the data. To employ snapshots as part of my personal backup strategy, these local snapshots need to be copied to a remote device.

I use zfs-send and zfs-receive to make the first full backup from my laptop to my home server. Subsequent incremental backups then track changes over time:

dwarmstrong.org/zfs-backups/
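
A minimal sketch of that full-then-incremental flow, assuming a local dataset zroot/home and a receiving dataset tank/backups/home on a host called server (all names are placeholders, not the ones used in the post):

# Initial full replication of a snapshot.
zfs snapshot zroot/home@2025-07-04
zfs send zroot/home@2025-07-04 | ssh server zfs receive -u tank/backups/home

# Later runs only send the changes since the last replicated snapshot.
zfs snapshot zroot/home@2025-07-11
zfs send -i @2025-07-04 zroot/home@2025-07-11 | ssh server zfs receive -u tank/backups/home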

#ZFS #FreeBSD #RunBSD

Cory Albrecht (bytor@mastodon.xyz)
2025-07-04

If you use #ZFS and could use help managing your snapshots, please try my update of this tool and report any bugs you find and features you think would be useful. I'd really like to keep this tool alive, current, and useful for the community.

And I wouldn't complain if you gave it a star on GitHub. 😉

Cory Albrecht (bytor@mastodon.xyz)
2025-07-04

There's a #ZFS tool I find very useful — ­zfs-auto-snapshot — but unfortunately the original project has not had any updates in over a year. 😞

I have it making snapshots monthly, weekly, daily, hourly, and every 15 minutes and deleting older ones (e.g. only 48 hourlies and 62 dailies) so they don't pile up.

So I added a feature and incorporated a bunch of others and some bug fixes that had been languishing in the original repo.

github.com/CoryAlbrecht/zfs-au
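
For anyone unfamiliar with the tool, it is essentially automating the following pattern on a schedule (a sketch with placeholder dataset and snapshot names, not the project's actual code; the pruning pipeline assumes a GNU userland for head -n -48 and xargs -r):

# Take a labelled hourly snapshot of a dataset.
zfs snapshot tank/home@zfs-auto-snap_hourly-2025-07-04-1500

# Prune hourlies beyond the configured count (48 here) by destroying the oldest ones.
zfs list -H -t snapshot -d 1 -o name -s creation tank/home \
  | grep zfs-auto-snap_hourly \
  | head -n -48 \
  | xargs -r -n 1 zfs destroy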

2025-07-04

Flattening the ZFS hierarchy on Ubuntu #snap #zfs

askubuntu.com/q/1552258/612

2025-07-03

The latency isn't bad at all (this is just about latency, not overall bandwidth, which reaches the full 4.9 Gbit via 2x 2.5 Gbit links).

Taken inside a Debian VM running on a Proxmox node connected to the storage:

GMKTec G9 NAS
2x 2.5Gbit
NFS 4.2 (with pNFS)
2x WD Black SN7100 NVMe
Mirror mode ZFS

While these are already pretty awesome latencies, let's see how it performs with SPDK and NVMe-oF (TCP).
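
For anyone who wants a comparable number, one way to take a queue-depth-1 random-read latency measurement from inside the VM is an fio run like the one below (a sketch; the post doesn't say which tool produced its figures, and the test file path is a placeholder):

# 4k random reads, single job, queue depth 1, direct I/O against a file
# on the NFS mount, reporting completion latency statistics.
fio --name=nfslat --filename=/mnt/nfs/fio-test --size=1G \
    --rw=randread --bs=4k --ioengine=psync --iodepth=1 \
    --direct=1 --runtime=60 --time_based --group_reporting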

#homelab #proxmox #storage #zfs #gmktec #gmktecg9 #latency #freebsd #NFS #SPDK #NVMe #NVMEoF

2025-07-03

This is how a failed GEOM Gate device in a #zfs mirror looks after an ungraceful shutdown. The load on my 15+ year old laptop was too high, I guess. #sshd suddenly logged me out after like 2 seconds; I couldn't even log in directly in front of the laptop. Console messages were along the lines of "jid0 couldn't reclaim memory". Had 3 jails, 2 VMs and a deduped ZFS pool running. Let's see if I can keep this running if the Win7 VM's memory is halved. Perhaps it is worth having a look at rctl…

Screenshot of the output of several FreeBSD commands. zpool status shows the ZFS mirror setup in status DEGRADED since the remote disk /dev/ggate0 is not available for the moment after a hard shutdown. The outputs of sysctl hw.model, sysctl hw.physmem, grep -E “memory.size|wired” vm-config and finally zfs get all zpool | grep dedup indicate that the system runs on an old AMD A4-5000 and 12GiB of memory while reserving 8GiB of memory for one VM and having deduplication activated. A final rctl command shows that resource control mechanisms have not been configured.
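
For reference, capping a jail's memory with rctl looks roughly like this on FreeBSD (jail name and limit are hypothetical, and RACCT/RCTL accounting usually has to be enabled first with kern.racct.enable=1 in /boot/loader.conf):

# Add a rule at runtime: deny further allocations once the jail named
# "jail0" exceeds 2 GB of resident memory.
rctl -a jail:jail0:memoryuse:deny=2g

# List the rules currently in force; persistent rules can go in /etc/rctl.conf.
rctl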
Practical ZFS - Discussions (PracticalZFS@feedsin.space)
2025-07-03
Snapshot Retention Policy Issues

Hi All,

I am seeking some help in understanding snapshot retention policies; I am clearly missing something.

I am using sanoid to create daily as well as monthly snapshots. I have also configured my TrueNAS backup server to pull these snapshots daily.

I was looking to configure a snapshot retention policy on TrueNAS as follows:
Keep daily for 365 days
Keep monthly for 84 months (for tax purposes)

I thought the way to do this was to create two separate Replication Tasks in TrueNAS.

  1. Use the option to “include snapshots matching naming schema” with “autosnap_%Y-%m-%d_%H:%M:%S_daily” with a “snapshot lifetime” of 365 days to /pool/filing

  2. Use the option to “include snapshots matching naming schema” with “autosnap_%Y-%m-%d_%H:%M:%S_monthly” with a “snapshot lifetime” of 84 months to /pool/filing

When the second job in the order runs I get the error “No incremental base on dataset ‘pool/Filing’ and replication from scratch is not allowed.”

I assume this occurs because the second job's snapshots predate the latest snapshot already on the target, and the replication cannot inject them back in time.

Is my understanding correct?
If that is the case, how should I be configuring this?

Thanks,

Adam
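
For context, a sketch of the underlying mechanism (dataset, host, and snapshot names are placeholders): an incremental ZFS stream can only be received if its base snapshot already exists on the target, which is what the TrueNAS error is reporting for the monthly-only task.

# A full send of a monthly snapshot works against an empty target dataset:
zfs send pool/filing@monthly-1 | ssh backup zfs receive backuppool/filing

# An incremental send requires the base (@monthly-1 here) to already be
# present on the target, otherwise the receive is rejected:
zfs send -i @monthly-1 pool/filing@monthly-2 | ssh backup zfs receive backuppool/filing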

1 post - 1 participant

Read full topic

#zfs

Practical ZFS - Discussions (PracticalZFS@feedsin.space)
2025-07-03
Convert folder to dataset

Hello,

I have a dataset rpool/stuff which has a folder media in it. How can I convert this folder into a dataset like rpool/stuff/media?

Thanks
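
One common approach, sketched below with placeholder mount paths (not necessarily the answer given in the thread): create the new dataset under a temporary name, copy the folder's contents into it, then swap it into place while nothing is writing to the folder.

# Create the new dataset next to the existing folder.
zfs create rpool/stuff/media_new

# Copy the folder's contents in, preserving attributes.
rsync -a /rpool/stuff/media/ /rpool/stuff/media_new/

# Once the copy is verified, remove the plain folder and rename the
# dataset into its place (the inherited mountpoint follows the rename).
rm -rf /rpool/stuff/media
zfs rename rpool/stuff/media_new rpool/stuff/media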

2 posts - 2 participants

Read full topic

#zfs

Garrett Wollman (wollman)
2025-07-03

Now doing the "fix incorrect ashift on a root pool remotely over a serial console" dance for the umpteenth time in the last five weeks... Thankful that I almost always manage to get machines with redundant boot drives for just this eventuality.
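
For context, ashift is fixed when a vdev is created, so the usual fix with redundant boot drives is roughly the following (a heavily abbreviated sketch with placeholder pool and partition names; bootloader reinstallation, bootfs, and mountpoint handling are omitted):

# Split one disk out of the existing root mirror.
zpool detach zroot da1p3

# Build a fresh pool with the correct ashift on the freed disk.
zpool create -o ashift=12 newroot da1p3

# Replicate everything over, then boot from the new pool and re-attach
# the first disk to rebuild the mirror.
zfs snapshot -r zroot@migrate
zfs send -R zroot@migrate | zfs receive -Fu newroot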

Jason Tubnor 🇦🇺 (Tubsta@soc.feditime.com)
2025-07-02
@psa @daedalus Another #FreeBSD host with large spinning rust on an NBN connection at a different location.

I did have one of those at one point. They are pretty good value for the storage. The vCPU and RAM don't matter if it is just a #ZFS target.
Jason Tubnor 🇦🇺 (Tubsta@soc.feditime.com)
2025-07-02
@daedalus @psa The only thing I host on a VPS (Binary Lane) is a mail server because of IP reputation. I dragged everything back onto my residential ISP connection and with a /48, everything works great. With 9 hours of run time on the ghetto UPS and bi-annual updates to the firewall/bhyve infrastructure, I'm probably still getting 99.9% #FreeBSD #OpenBSD #ZFS

The only thing that caps my link out daily is the out-of-hours ZFS send/recv over IKEv2 to off-site backup, but that's flexible and ramps back if capacity is needed.
Stefano Marinelli (stefano@bsd.cafe)
2025-07-01

Dev just messaged me, alarmed: a WordPress plugin's tech support logged into his site and broke everything! 😱
But Dev is sharp, and even at 22:10, he remembered his server runs FreeBSD and ZFS. With snapshots of his site and database every 15 minutes, we rolled back to the 20:00 snapshot, and his site was back up in a flash!
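
The recovery step amounts to one command against the last good snapshot (dataset and snapshot names below are hypothetical, not the actual ones):

# Roll the site's dataset back to the 20:00 snapshot; -r also discards
# the newer 15-minute snapshots taken after it.
zfs rollback -r tank/www/site@auto-2025-07-01_20:00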

Thank you, FreeBSD! Thank you, ZFS!

#FreeBSD #ZFS #WebDev #WordPress #SysAdmin #DataRecovery #Snapshots #ThankYouTuesday

2025-07-01

Here's the customary #introduction: i'm into #C and tolerate C++ on a daily basis at work, i've also used others like java, kotlin, python, PHP, etc and am curious about #COBOL, #AdaLanguage and #erlang.

My dislike of jenkins is only surpassed by my hate of githubactions and everything MS-related. AI is not I, only A. I'm interested in #selfhosted stuff but atm that's a VPS with some sites, which doesn't really count. For now #syncthing is quite useful and #wireguard is on the horizon once i reformat/reinstall my current #gentoo (i'll keep the root #ZFS approach and am on the fence regarding #XFCE or #KDE), would be interesting to have a barebones #KVM/#QEMU running all the stuff and i digress.

kthxbai\0

Practical ZFS - Discussions (PracticalZFS@feedsin.space)
2025-07-01
Is it safe to use a replicated copy of a dataset with permanent errors?

I’ve started getting weird slowdowns and occasional replication errors on one dataset (named ephemeral) on my encrypted main pool (named hdd) recently. But the pool overall was usable, even the affected dataset. More than that, I successfully rsynced the entire pool to a different machine onto a BTRFS filesystem. I also tried to raw send it to another box in the meantime, but that also started failing. Most of the failures reported mismatched snapshots even when names and GUIDs matched. I’ve run scrubs and none of them repaired any data.

My first idea was that it was caused by a dodgy cable or hot-swap bay. So I just replaced the cables and plugged drives directly into a new HBA. Pool still reports permanent errors, but I did sync (not raw this time) the affected dataset with a second, also encrypted pool (named oldhdd) that already had an out of sync copy.

Is it safe to use the replicated copy on oldhdd (that’s where I want to keep that dataset) or should I rather recreate the dataset by rsyncing the original or my BTRFS copy? Snapshots are not important. And is it safe to continue using other datasets on hdd?

Current state is below. Yes, I know I need to upgrade the OS; I want to solve the current issue first.

# zfs --version
zfs-2.3.2-1
zfs-kmod-2.3.2-1

# uname -a
Linux menator 6.14.5-100.fc40.x86_64 #1 SMP PREEMPT_DYNAMIC Fri May  2 14:22:13 UTC 2025 x86_64 GNU/Linux

# zpool status -vx oldhdd
pool 'oldhdd' is healthy

# zpool status -vx hdd
  pool: hdd
 state: ONLINE
status: One or more devices has experienced an error resulting in data
        corruption.  Applications may be affected.
action: Restore the file in question if possible.  Otherwise restore the
        entire pool from backup.
   see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-8A
  scan: scrub in progress since Tue Jul  1 13:32:05 2025
        5.50T / 16.2T scanned at 3.07G/s, 30.8G / 16.2T issued at 17.2M/s
        0B repaired, 0.19% done, 11 days 09:49:18 to go
config:

        NAME                        STATE     READ WRITE CKSUM
        hdd                         ONLINE       0     0     0
          mirror-0                  ONLINE       0     0     0
            wwn-0x5000c500c9815a15  ONLINE       0     0     0
            wwn-0x5000039d68d9e476  ONLINE       0     0     0

errors: Permanent errors have been detected in the following files:

        hdd/ephemeral/vtorrent@syncoid_tao_2025-06-27:15:32:47-GMT02:00:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-07-01_13:30:35_hourly:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-07-01_12:03:46_hourly:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-07-01_14:01:05_hourly:<0x1>
        hdd/ephemeral/vtorrent@syncoid_menator_2025-06-30:00:06:27-GMT02:00:<0x1>
        hdd/ephemeral/vtorrent@syncoid_ubuntu-cinnamon_2025-06-30:18:19:53-GMT00:00:<0x1>
        hdd/ephemeral/vtorrent:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-07-01_00:00:46_daily:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-06-30_00:01:02_daily:<0x1>
        hdd/ephemeral/vtorrent@autosnap_2025-07-01_11:00:17_hourly:<0x1>

1 post - 1 participant

Read full topic

#zfs

2025-06-30

@stefano Not as smooth to install (as it's not supported out of the box by most distros), but I find ZFSBootMenu (typically in conjunction with rEFInd) works incredibly well for Linux ZFS-on-root. docs.zfsbootmenu.org/en/v3.0.x

#ZFS

vermaden (vermaden)
2025-06-30

Latest Valuable News - 2025/06/30 available.

vermaden.wordpress.com/2025/06

Past releases: vermaden.wordpress.com/news/

#verblog #vernews #news #bsd #freebsd #openbsd #netbsd #linux #unix #zfs #opnsense #ghostbsd #solaris #vermadenday
