#zfs

Dan Langille dvl@bsd.network
2025-12-11

"One or more devices are configured to use a non-native block size."

Follow along, correct my math.

dan.langille.org/2025/12/11/co

#FreeBSD #ZFS

Eugene :freebsd: :emacslogo: evgandr@bsd.cafe
2025-12-11

#TIL that there are no knobs in #NetBSD to configure #ZFS ARC memory consumption.

I started using ZFS for the disk holding my backups and digital archives about a month ago, because I didn't want to think about resizing LVM partitions. And my homelab server has only 2 GB of RAM and some swap :drgn_cry:

Whoops. Not a good surprise :drgn_blush:
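
(For comparison: on FreeBSD and Linux the ARC ceiling is a single tunable; the post's point is that NetBSD currently exposes no equivalent. A minimal sketch, with 512 MB as an example value.)

# FreeBSD: cap the ARC at 512 MB (value in bytes); add to /etc/sysctl.conf to persist
sysctl vfs.zfs.arc_max=536870912

# Linux: the same limit as an OpenZFS module parameter; persist via /etc/modprobe.d/zfs.conf
echo 536870912 > /sys/module/zfs/parameters/zfs_arc_max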

Practical ZFS - Discussions PracticalZFS@feedsin.space
2025-12-11

Can you delay the state of the live dataset between two hosts by retention policy with syncoid?

Hi there,

I am currently replicating a dataset from one host (storage-vm) to another host (live-vm) with syncoid on an hourly timer. The command that my systemd service uses is /usr/sbin/syncoid --recursive tank/dataset root@live-vm:tank/target and the snapshots it’s transferring come from sanoid, which snapshots the dataset hourly.

The retention policy on storage-vm is to keep 24 hourlies, 7 dailies, 3 monthlies. The retention policy on live-vm is only 24 hourlies. All of this works fine. Only _hourly snapshots make it to live-vm and they are pruned after 24h. So far, so good.

Now, what I want is to NOT transfer the state of the LIVE dataset between the two. What I explicitly want is: when I delete files and folders inside “storage-vm:tank/dataset” with rm, those files and folders should still be present on the live dataset “live-vm:tank/target” until the last snapshot that contained them is pruned, i.e. 24h after I deleted the files/folders on storage-vm.

Is something like this even possible with ZFS/syncoid? ChatGPT gaslit me by saying that this is the standard behaviour of syncoid and that it “never transfers the state of the live dataset, only snapshots” and that my “files will only stop showing up in the live dataset on live-vm when the last snapshot that contains them is pruned there”. Neither, of course, was true.

After I told it that my files & folders got deleted on live-vm exactly 1h after I deleted them on storage-vm, i.e. after the very next snapshot replication and not 24h later, it advised me to use the option “--no-sync-snap”, but after reading the description of “--no-sync-snap” I fail to see how this would help.

Is there actually an option with ZFS/syncoid to delay deletion of files & folders between two synced hosts by whatever time period you set in the retention policies?

Thanks in advance!

1 post - 1 participant

Read full topic


zfs
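
(For reference, a minimal sanoid.conf sketch matching the retention described above; the section and template names are illustrative, not taken from the poster's setup. Note that zfs receive always advances the target filesystem to the state of the most recently received snapshot, so on live-vm older file versions remain reachable only under its .zfs/snapshot directories until those snapshots are pruned.)

# /etc/sanoid/sanoid.conf on storage-vm -- hypothetical, mirrors the policy described in the post
[tank/dataset]
        use_template = production
        recursive = yes

[template_production]
        hourly = 24
        daily = 7
        monthly = 3
        autosnap = yes
        autoprune = yes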

2025-12-10

Alright, migration completed: replaced #proxmox with #freebsd 15, using bhyve for the VMs and sharing #zfs pools via NFS where I need them. The only missing piece is setting up Samba shares for backups and copying over the media files. Every app is already running under #docker in a #debian VM. I also have to set up the cron jobs for replication and backups, but I'll leave that for the weekend. #homelab life
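
(A minimal sketch of the NFS-over-ZFS piece on FreeBSD; the pool/dataset and network below are placeholders, not from the post.)

# enable the NFS server once in rc.conf, then start it
sysrc nfs_server_enable=YES mountd_enable=YES rpcbind_enable=YES
service nfsd start

# export a dataset through the sharenfs property instead of editing /etc/exports by hand
zfs set sharenfs="-maproot=root -network=192.168.1.0/24" tank/media
zfs get sharenfs tank/media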

🆘Bill Cole 🇺🇦 grumpybozo@toad.social
2025-12-10

This is a hint of why I — a big fan of #ZFS since I was a Solaris admin — despise the fact that I have one Linux machine using ZFS and am planning to replace it without ZFS.

The technical approach taken to working around the perceived license risk makes the whole system unreliable. Every time I run into it I am grumpy. social.lol/@robn/1156936098794

Dan Langilledvl@bsd.network
2025-12-10

I recall seeing this before on this host, but I'd forgotten about it:

[16:49 r720-02 dvl ~] % zpool status data01
  pool: data01
 state: ONLINE
status: One or more devices are configured to use a non-native block size.
        Expect reduced performance.
action: Replace affected devices with devices that support the
        configured block size, or migrate data to a properly configured
        pool.
  scan: scrub repaired 0B in 00:15:40 with 0 errors on Mon Dec 8 04:05:38 2025
config:

        NAME                         STATE     READ WRITE CKSUM
        data01                       ONLINE       0     0     0
          mirror-0                   ONLINE       0     0     0
            gpt/S59VNS0N809087J_S00  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/S59VNJ0N631973D_S01  ONLINE       0     0     0  block size: 512B configured, 4096B native
          mirror-1                   ONLINE       0     0     0
            gpt/S5B3NDFN807383E_S02  ONLINE       0     0     0  block size: 512B configured, 4096B native
            gpt/S5B3NDFN807386P_S03  ONLINE       0     0     0  block size: 512B configured, 4096B native

errors: No known data errors

#FreeBSD #ZFS
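
(The warning above means these vdevs were created with ashift=9, i.e. 512 B sectors, on drives that report 4 KiB natively. A hedged sketch of how to confirm that, and how a rebuilt pool avoids it; the pool and disk names in the last command are examples only.)

# show the ashift baked into each vdev at creation time
zdb -C data01 | grep ashift

# FreeBSD knob so newly added vdevs never drop below 4 KiB alignment
sysctl vfs.zfs.min_auto_ashift=12

# a replacement pool can also force it explicitly (2^12 = 4096 bytes)
zpool create -o ashift=12 data02 mirror gpt/disk0 gpt/disk1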

2025-12-10

second #zfs raidz expansion is going a lot better than the first one, where two disks failed. two more disks to go on this vdev, then i can make another one with four disks and slowly scale it.

#freebsd #linux

zpool status screen showing raidz expansion at 85% with around 8 hours to go
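
(For anyone following along: with OpenZFS 2.3 raidz expansion is one attach per disk, and progress shows up in zpool status while data is reflowed. Pool, vdev, and device names below are examples.)

# grow an existing raidz vdev by a single disk
zpool attach tank raidz2-0 /dev/da4

# watch the expansion progress
zpool status tank
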
2025-12-10

chris@arthur ~> sudo zfs destroy -v -r zroot/jails/mantis
will destroy zroot/jails/mantis
Read from remote host arthur.fritz.box: Connection reset by peer
Connection to arthur.fritz.box closed.
client_loop: send disconnect: Broken pipe

Trying to destroy a #ZFS filesystem on #FreeBSD reproducibly kills the ssh connection. W00t? Help!
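
(Not a fix for the dropped connection, but a sketch of how to see what would be destroyed first and keep the destroy alive if the ssh session dies.)

# dry run: list what would go, without destroying anything
sudo zfs destroy -n -v -r zroot/jails/mantis

# run the real destroy inside tmux so a reset ssh connection doesn't take it down
tmux new -s destroy
sudo zfs destroy -v -r zroot/jails/mantis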

2025-12-09

#zfs

that is all

2025-12-09

Oh I love it. ZFS is too nice. Created a mount point for an LXC within the Proxmox UI and made it 100GB instead of 10GB.

Editing `/etc/pve/lxc/105.conf` to adjust the description and running `zfs set refquota=10G tank/subvol-105-disk-1` shrank the disk/mount point back to the correct size. NICE!

#proxmox #lxc #zfs
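
(A quick way to confirm the shrink on the ZFS side, using the dataset name from the post:)

# check the new quota against current usage
zfs get refquota,referenced,available tank/subvol-105-disk-1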

Practical ZFS - Discussions PracticalZFS@feedsin.space
2025-12-09

8 vs 16 channel HBA: LSI 9300-8i / 9300-16i

Hi,

I currently have an LSI 9300-8i (I think). I’m thinking about swapping it for a 9300-16i. However, I’ve heard the 16i is basically two 9300-8i controllers on the same board and runs awfully hot. My 9300-8i already runs pretty hot, so if that’s the case I’d likely just get a second 9300-8i.

What do you think?

1 post - 1 participant

Read full topic


zfs

2025-12-09

The author builds a homelab with 3 Lenovo Tiny machines + a Pi 5: Immich for photo management, Nextcloud for personal storage, ZFS with off-site backups, a local LLM, and Pi-hole/VPN. The design splits responsibilities cleanly: M920q #1 runs Proxmox (VM/LXC), M920q #2 is the NAS (TrueNAS_SCALE), and the M700 is the backup target. Tags: #Homelab #Tựhost #Nextcloud #ZFS #Linux #SSG #AIO #RaspberryPi

**Hashtags:**
#Homelab #TựHost #Linux #Nextcloud #ZFS #SSG #AIO #RaspberryPi #Immich #LabTạiNhà #ThiếtLậpĐámMây #Network #OpenSource


2025-12-08

Tossed another 8TB disk in to expand some more. Other than my SMR mishap, I'm starting to love raidz expansion.

#freebsd #linux #zfs

zpool status showing raidz2 expansion
2025-12-08

@JdeBP That laptop has some problems. I was able to get FreeBSD installed after changing it from #BIOS to #UEFI (with hybrid CSM). I spent some time with it today on my lunch break and changed it to UEFI with native CSM. There was no change. But what I did find is I am unable to update the BIOS (even after the recent recommended downgrade from HP) because... the NIC isn't recognized. That is new. So now the laptop is currently stuck on a firmware from 2018 with apparently no #Ethernet. Stupendous!

I set it aside and dug out another spare laptop. This time my mother's old #Toshiba #Satellite that was running #Windows7. It's a Core i7 with 8GB of RAM and an ancient 5,400 RPM disk. Should be fine. Well, I ran into problems after choosing the auto #ZFS option. After that, the installer could not proceed. Even after rebooting the laptop and restarting the installer I was unable to do anything to the disk, including delete slices or choose auto UFS. I have never experienced this before with FreeBSD and I am comically puzzled.

I did have a spare FreeBSD 14.1 DVD laying around and so far the installer is working just fine, even if it is moving slower than molasses down a freezer wall.

#tech #problems #wtf

Practical ZFS - Discussions PracticalZFS@feedsin.space
2025-12-08

AlmaLinux 10: missing mbuffer rpm

Hi,
I’m trying to install sanoid on a new AlmaLinux 10 box. It seems that the packages mbuffer and mhash were removed from the EPEL repository. I don’t have a RHEL or Rocky installation, but I guess this will also be the case for those distros. I didn’t find any mention of mhash*, other than in the INSTALL.md, but mbuffer is another story. My understanding, though, is that it is not a hard dependency and syncoid should work without it. Is that right?

*I just rgrepped the repo for mhash, not exactly a deep dive

1 post - 1 participant

Read full topic


zfs
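
(If EPEL really has dropped it: mbuffer is only an optional buffering helper for syncoid, which simply skips it when the binary is missing, and it is a small autotools project that builds from source. The tarball name below is a placeholder; grab the current release from the mbuffer homepage.)

# standard autotools build of mbuffer from an upstream release tarball
tar xf mbuffer-VERSION.tgz && cd mbuffer-VERSION
./configure && make && sudo make install

# syncoid keeps working either way; this just shows whether buffering will be used
which mbuffer || echo "no mbuffer found -- syncoid will run unbuffered"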

Dan Langilledvl@bsd.network
2025-12-08

I've said this before. Good software is software you forget you're running.
Today, a #ZFS zpool is up to 87% capacity. What?
Oh yes, my full backups ran yesterday (first Sunday of the month).
This issue is partly why I'm moving to bigger zpools, perhaps finishing this week.

#Bacula
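
(A one-liner for keeping an eye on exactly that:)

# size, allocation, and fill percentage for every imported pool
zpool list -o name,size,allocated,free,capacity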

Darren Moffat darrenmoffat
2025-12-08
vermaden vermaden
2025-12-08

Latest 𝗩𝗮𝗹𝘂𝗮𝗯𝗹𝗲 𝗡𝗲𝘄𝘀 - 𝟮𝟬𝟮𝟱/𝟭𝟮/𝟬𝟴 (Valuable News - 2025/12/08) available.

vermaden.wordpress.com/2025/12

Past releases: vermaden.wordpress.com/news/

#verblog #vernews #news #bsd #freebsd #openbsd #netbsd #linux #unix #zfs #opnsense #ghostbsd #solaris #vermadenday

2025-12-08

So I ran some commands, without reading them first, on both a Debian and a FreeBSD system, and they effed up my permissions. Then I accidentally copied the (backup) Debian /home/thedaemon/ onto FreeBSD instead of the FreeBSD one. Boy am I glad I took a ZFS snapshot a few days ago. This is a lifesaver for overwriting things/doing admin work while high! #FreeBSD #ZFS
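
(For reference, the two usual ways back from an overwritten home directory, assuming /home/thedaemon is its own dataset; dataset and snapshot names below are placeholders.)

# cherry-pick files out of the read-only snapshot directory
mkdir -p /tmp/restore
cp -a /home/thedaemon/.zfs/snapshot/pre-mishap/. /tmp/restore/

# or roll the whole dataset back (discards everything written after the snapshot)
zfs rollback zroot/home/thedaemon@pre-mishap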
