Caveat emptor: I've run ZFS on a two-digit number of machines for >10 years. To me, a big benefit of ZFS' rampant layering violation is that it shrinks the surface of weird interactions and edge cases to learn about. You get to know the FS, play with it, and then off you go. Most other solutions require a lot of thought and planning. Perhaps there is a better way to tune the storage setup to exactly what you want, but with ZFS you aren't seduced into getting creative with the architecture. So I'm in favor of native ZFS encryption due to its usability.

That said, it is not as mature as I'd like, and there is at least one very big issue to know about: I recently sent an encrypted FS with zfs send/receive, and the received copy ended up being unreadable. So that means you should test your setup precisely. The technical detail from that thread: "This happens because when sending raw encrypted datasets the userspace accounting is present. This leads to the subsequent mount failure due a checksum error when verifying the local mac. I tried unsuccessfully to tackle this in #11300. Edit: If you have critical data lost due to this case I could help you recover them." And there's this comment which has a lot more pointers to recent problems.

I haven't run ZFS + LUKS, but I'm running ZFS Native Encryption. There is an outstanding issue that I've encountered on two separate machines running this setup. The issue occurs when using ZFS Native Encryption + (NVMe?) SSDs, and it appears to be snapshots getting corrupted (or maybe failing to create?) occasionally. It happens roughly 1/1000 snapshots in my case. I take a lot of snapshots, so this occurs weekly.

I haven't lost any data yet, but this issue is annoying because I get spooky alerts from ZFS warning me about "potential data corruption". To clear them I have to manually intervene: delete the corrupt snapshots, restart the machine, then scrub. What's most annoying is that this breaks ZFS send/recv until I intervene, and send/recv was the whole reason I went with native encryption instead of LUKS in the first place. Again, no data corruption so far (if you don't count the snapshots that get lost, which I don't, because I have so many of them). Just very annoying and tedious.

As others, I'm running ZFS with native encryption on my daily driver Linux box (Arch). As others have said, I like the benefits, which are easy send/receive, etc. (but mind the bugs some other commenter mentioned). I will also give a few drawbacks / points in favor of using LUKS.

In my case, Arch usually has up-to-date kernels, and these are not always compatible with the latest stable OpenZFS release. Issues range from benign (ultra-high CPU usage, making my 980 PRO SSD slower than spinning rust) to the module not compiling at all.

Another drawback is that booting with systemd in the initramfs doesn't support ZFS native encryption, so you have to type in your password every time you boot up the computer. If you want hibernation to work, you need a separate swap, encrypted separately, so that's two passwords to type in. systemd recently got support for TPM, so it can automatically decrypt devices; I have this set up on a separate machine (with ext4) and it's nice. I know not everyone likes the idea of automatically decrypting the drive, but that's another debate.
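The TPM support mentioned is systemd-cryptenroll (systemd 248+). For a LUKS volume it looks roughly like this; the device path and volume name are hypothetical:

```shell
# Enroll a TPM2-sealed key for a LUKS volume (device path hypothetical).
systemd-cryptenroll --tpm2-device=auto --tpm2-pcrs=7 /dev/nvme0n1p2

# Then let the initrd unlock it at boot via /etc/crypttab:
#   cryptroot  /dev/nvme0n1p2  none  tpm2-device=auto
```

PCR 7 binds the key to the Secure Boot state, so firmware tampering forces a fallback to the passphrase; choose PCRs to taste.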
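A quick way to check whether the running kernel and the installed OpenZFS module still agree, before rebooting into a kernel upgrade (Arch-flavoured sketch; details depend on whether you use zfs-dkms or a prebuilt module package):

```shell
# Compare the running kernel against the installed ZFS module (Arch-flavoured sketch).
uname -r                          # currently running kernel
modinfo zfs | grep -i '^version'  # installed OpenZFS version
dkms status                       # for zfs-dkms: per-kernel build status
```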
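The manual intervention described here (find and delete the corrupt snapshots, reboot, scrub) looks roughly like the following; pool, dataset, and snapshot names are hypothetical:

```shell
# Identify the affected snapshots, then clear them (names hypothetical).
zpool status -v tank                          # "Permanent errors" lists the bad snapshots
zfs destroy tank/secret@autosnap_2024-01-01   # repeat for each corrupt snapshot
reboot

# After the reboot:
zpool scrub tank
zpool status tank    # the error log may need a completed scrub (sometimes two) to clear
```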
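For context, a natively encrypted dataset of the kind discussed here is created roughly like this (pool and dataset names are hypothetical):

```shell
# Create a natively encrypted dataset (names hypothetical).
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o keylocation=prompt \
           tank/secret
```

Snapshots of such a dataset can then be replicated with `zfs send --raw` without the key ever being loaded on the receiver.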
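Given the send/receive failure described in the thread, a round-trip test is cheap insurance. A minimal sketch, assuming hypothetical pool names `tank` and `backup` and a passphrase-keyed dataset:

```shell
# Round-trip test for an encrypted dataset (pool/dataset names are hypothetical).
zfs snapshot tank/secret@check
zfs send --raw tank/secret@check | zfs receive backup/secret

# The real test: can the received copy actually be opened and read?
zfs load-key backup/secret             # prompts for the passphrase
zfs mount backup/secret
diff -r /tank/secret /backup/secret    # spot-check file contents
```

`--raw` (`-w`) sends the blocks still encrypted, which is the code path the quoted bug lives on, so it is the variant worth testing.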