Just got my new laptop! Primary development OS debate...Linux vs FreeBSD

@Syonyk zfs is weird for root? Don’t see that; it’s been used for quite some time on BSDs just fine. In fact it has lots of benefits: snapshots before upgrades/config changes/etc., so you can simply roll back to a known good working version. As long as you can boot the system to single-user or with a live image, you can roll it back. Easy, built-in quota support for home dirs and such. Easy to have a dataset per user as their home dir, no muss, no fuss. /var/log can be its own dataset, size limited, so it doesn’t take over your entire root fs in the event of something over-logging; likewise with /tmp. So nice things like that.
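To make that concrete, here’s a minimal sketch of that workflow, assuming a FreeBSD-style zroot layout (pool and dataset names are placeholders):

# Snapshot the boot environment before an upgrade:
zfs snapshot -r zroot/ROOT/default@pre-upgrade
# If the upgrade goes sideways, roll back from single-user or a live image
# (-r also discards any snapshots taken after pre-upgrade):
zfs rollback -r zroot/ROOT/default@pre-upgrade
# Per-user home dataset with a quota:
zfs create -o quota=50G zroot/home/alice
# Cap /var/log so runaway logging can't eat the pool:
zfs set quota=2G zroot/var/log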

For FreeBSD, it’s a painless process, and well supported. Initially you had to do some manual magic, but these days it’s just one other install option. I suppose it was too much to hope that Linux distros had moved on to allowing that as well. Ah well. :frowning:

@bombcar yeah, thanks for those, but it’s not that ephemeral. Already bought the license and have had it sitting around waiting for the laptop.

As for swap resize…it’s not the swap resize itself that’s the problem, it’s resizing a ZFS pool. As this post says, shrinking a pool is, in theory, maybe possibly doable. Not really, though. It’s Bloody Johnson stupid. So you can’t resize smaller, only larger.

In theory I can use a zvol for swap, but as pointed out by a ticket at the bottom of that, systems with high memory pressure can actually lock up. So…I’m gonna go with no.
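For reference, the usual zvol-swap recipe (the one that ticket warns can deadlock under memory pressure) looks roughly like this; rpool is a placeholder pool name:

# Create a zvol tuned for swap, per the common OpenZFS guidance:
zfs create -V 8G -b $(getconf PAGESIZE) \
    -o logbias=throughput -o sync=always \
    -o primarycache=metadata -o com.sun:auto-snapshot=false \
    rpool/swap
mkswap /dev/zvol/rpool/swap
swapon /dev/zvol/rpool/swap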

As for Ubuntu, the installer doesn’t give me any chance to do anything other than choose the ZFS root method, and once I confirm it all, it does everything in one shot, with no chance for me to step in before it starts actually copying stuff.

I agree that ZFS root almost makes better sense than ZFS for data partitions - mainly because of the ability to snap the entire system, screw it up, and then revert.

@Drizzt321 - have you tried installing via the “advanced” options or install the Ubuntu 20.04 server and then add the desktop packages on top?

Or could you put a Windows partition of X size, install Ubuntu (does it leave Windows alone?), and then resize Windows down and increase the zpool? Then you can add a separate (second) swap partition.

I was able to do it with Gentoo but that’s very-very manual:

tank ~ # zfs get compressratio
NAME                                               PROPERTY       VALUE  SOURCE
root                                               compressratio  1.05x  -
root/ROOT                                          compressratio  1.05x  -
root/ROOT/boot                                     compressratio  1.29x  -
root/ROOT/gentoo                                   compressratio  1.05x  -
root/SWAP                                          compressratio  4.65x  -

I’m thinking I’m just going to run Win as a VM when I need it. Back when, I was thinking I might dual boot, but Syonyk’s thought above makes a lot of sense for my Win needs: if I have to install something (for some Reasons), I can snapshot first, then revert to ‘uninstall’ it cleanly and 100%. If I was going to actually game on it, or needed real GPU compute or the like, I might dual boot.

But that’s an interesting thought: install Win on a small partition, install with ZFS root on the rest of the disk, and then later nuke the Win partition and use it as a 2nd swap. That is, if the Ubuntu installer will detect Win when doing a ZFS root install and not nuke it. Not sure.

That Advanced install you’re talking about, that’s what you do for the Desktop version as well, to get ZFS Root install. Unless the server ZFS Root allows for selecting swap size.

Although…given a laptop with iGPU and a dGPU… could I just use PCIe passthrough to pass through the dGPU to a Win10 VM and get nearly native gaming performance? Hmmm… thought for the long term future. Or for a future desktop re-build. 1 cheaper, older GPU for Linux stuff, and a higher end gaming GPU for Win VM for gaming. I digress, something for a future test.

FYI that swap dataset is using a ZFS zvol for swap, so it might be subject to the bug mentioned above.

Interesting bug - I should probably just not have swap at all, or swap somewhere else on this system (it has never once used any swap)

Maybe I’m thinking of the wrong thing. zfs is the “You toss a bunch of disks into the system, and it pools them together into storage, some redundant, some not, and God help you if something goes wrong with it” system?

I mean I guess you can use it that way - ZFS is the “checksum everything and then checksum some other things while you’re at it so bitrot never gets you” filesystem.
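If you want to see that checksumming in action, a scrub walks the whole pool and verifies every block (tank is a placeholder pool name):

zpool scrub tank
zpool status -v tank    # reports any checksum errors it found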

Does it store enough redundant information to rebuild corrupted blocks? I’ve gotten bitten in the past (total catastrophic data loss) by ReiserFS (don’t store Reiser images on a Reiser system), and btrfs on a Pi over USB ended up totally unusable and unrecoverable as well. So I’ve just stuck to ext4 lately.

Yea, didn’t read too closely on the bug, just seeing it made me nope out of that thought. And it’s actually a really nice idea! Lets you change swap size pretty easily at will, depending on whether you need more or less based on how much space you have available. Too bad it’s not working well currently.

Reading through the ticket, love this quote from this comment

I disagree. Administrators should not be prevented from shooting themselves in the foot, so long as the warnings are clear.


Allowing people to do stupid things also allows them to do clever things.

ZFS can do “bare drive”, in which case you at least get to know you’ve lost data (it refuses to return blocks that fail checksum) - this would apply to all the “no parity” levels. Everyone recommends at least a mirror, in which case a bad block is noticed and repaired from another mirror. It can do parity levels too: RAIDZ1, Z2, Z3, which roughly correlate to RAID 5, RAID 6, and so on.
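For anyone following along, creating those layouts looks like this (device names are made up):

# Two-way mirror:
zpool create tank mirror /dev/sda /dev/sdb
# Or single-parity RAIDZ1 across four disks (three disks usable):
zpool create tank raidz1 /dev/sda /dev/sdb /dev/sdc /dev/sdd
# Or double parity:
zpool create tank raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd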

It can’t easily do expansion.

So it’s only after a block is confirmed as written (remember it’s Copy on Write) that the metadata is updated to point to the newly written block for that file. It’s not perfect, of course, but it is resilient.

After some quick searching, this brief article seems to show the levels of care ZFS takes to try and keep the pool at least readable/intact, even if there are some bits that have been corrupted or individual data blocks are corrupted for whatever reason.

Not exactly a counter-point, but as an example of the various intersections of “is it ZFS, is it the underlying hardware, is it the underlying virtualized setup?”, I found this experience, with the author following up on here, which has some decent comments, and this reddit thread which has a lot more insightful comments. In short, it was a single-disk ZFS pool, so it could detect corruption but could not repair it. And it was complicated by the fact that it was a virtualized machine on a remote provider. The reddit thread & comments provide a lot more of interest, really.

EDIT: Following up from @bombcar, I think what you’re asking is “how does ZFS not hose the entire FS from a single or small number of corrupted blocks, and I can at least read everything else” vs “there’s a corrupted block that hoses this file, that’s OK, it happens, if I only had 1 copy such is life, but I can still access everything else fine”.

The short article I mention shows #1 fairly well, while the travails of the author I linked to show #2.

Actually, a ZFS pool can do expansion fine, it’s just a bit more complicated depending on the setup. For a single disk, if it has contiguous free space, I believe you can just resize the partition. I’d want to check on that.
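If the partition-resize route works, the pool side of it should just be this (hypothetical pool/device names):

# Let the pool grow into the enlarged partition:
zpool set autoexpand=on tank
zpool online -e tank /dev/sda2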

If you’re talking RAIDZ/2/3, you replace a device with a larger one, resilver, and then after you’ve done that with all of the devices you have access to the greater total storage. Might need to issue a few commands.
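Roughly, for each disk in turn (names hypothetical):

# Swap a member disk for a bigger one and wait for the resilver:
zpool replace tank /dev/sda /dev/sde
zpool status tank    # repeat for each disk once resilvering completes
# After all members are replaced (if autoexpand isn't on):
zpool online -e tank /dev/sde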

I believe you can simply add additional vdevs (not zvols - a zvol is a block device, not a filesystem dataset, so I need to correct my terminology from above) to a pool and get additional storage, but then I believe ZFS stripes across all the top-level vdevs within the pool. I think it can work fine, but it’ll be lopsided and generally isn’t a preferred way to go about it. But then again, LVM doing something similar is also a bad idea AFAIK. If it can even be done.
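Adding a vdev is a one-liner; new writes then stripe across all the top-level vdevs (names made up):

zpool add tank mirror /dev/sdc /dev/sdd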

That RAIDZ thing is what people do NOT expect - they want to have a 4 disk RAIDZ1 (three disks usable capacity) and add a fifth disk and now have a 5 disk RAIDZ1 - which you cannot do without backing up and recreating the entire pool.

Of course people do go on about how that’s silly, why would anyone ever want to do that, simply replace your disks one by one and resilver each time, ignore that it takes forever, etc, etc. The moment ZFS can do dynamic pool expansion similar to a hardware RAID controller they’ll suddenly love it.

(Personally I just do ZFS mirrors and stripe the mirrors together - it’s kinda like RAID10 but not really and has significant performance improvements over RAIDZ.)
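That layout is just multiple mirror vdevs in one pool (hypothetical devices):

# A stripe of two mirrors, RAID10-ish:
zpool create tank mirror /dev/sda /dev/sdb mirror /dev/sdc /dev/sdd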

You CAN tell ZFS that it should store multiple copies of USER DATA (it already does this for metadata) and then you CAN actually survive bad blocks on a single device zpool. https://docs.oracle.com/cd/E19253-01/819-5461/gevpg/index.html
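That’s the copies property; a minimal sketch with a hypothetical dataset name (note it only applies to data written after it’s set):

zfs create -o copies=2 rpool/important
# or on an existing dataset:
zfs set copies=2 rpool/important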

It’s not backup and it’s not RAID but it’s … something I guess.

It’s basically mirroring (for user data) without having to go through the hassle of multiple partitions on a single device. Like, say, ZFS root on a single device where you want a segment of your data files to be corruption protected. Doesn’t help if the device goes totally fubar, but it can protect against bit-flips and individual bad blocks and the like.

And I know RAIDZn disk addition has been talked about for a while, and is in the works. I don’t think it’s done yet - a quick search found this as alpha code - but it is under active development.

Online capacity expansion is one of those things that sounds very enterprise but in fact the ONLY people who realistically ever USE it are home users. I worked with a custom kernel driver that could do multiple disk parity back before md even supported RAID 6 and we offered any combination of parity out of 12 drives and could convert between them - nobody ever used it :smile:.

Though being able to tell ZFS something like “here’s a zpool of 12 drives, start with a 12 drive mirror and slowly convert until you’re at 9 drives of capacity and 3 drives of parity and then report that you’re full” would be an interesting use case. More parity when you don’t need the space slowly moving to less parity as you need more space.

So, good news/not so good news (neutral?). Got Neon installed, apparently fine, via the Ubuntu 20.04 on ZFS Root instructions, and it boots. However I get no GUI login prompt; I have to switch to terminal 2, log in, then run startx to get to the Plasma desktop. Looks like they’re using SDDM as the display manager. It’s set to enabled, but even after a reboot I still don’t have a GUI login. I have a Stack post up.
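For anyone hitting the same thing, the first checks I ran are just the standard systemd ones:

systemctl is-enabled sddm
systemctl status sddm
journalctl -b -u sddm    # errors from the current boot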

Do you mean you get no GUI at all (simply a blank terminal login) or do you get a GUI but no way to login and use it?

Terminal 1 had all the boot messages, no login prompt. The rest of the vttys had normal terminal logins. And once logged in, startx worked as per usual to get me into Plasma.

So this is interesting. When I boot with nomodeset, like I have to with the Live USB, I get the SDDM login prompt as I expect. Very interesting. Strange. I also found these amdgpu utils, and they can’t detect that I have the amdgpu driver installed. Yet it’s in the kernel.

Something tells me this should lead me towards the root issue. I hope.

# uname -a
Linux darklaptop 5.4.0-67-generic #75-Ubuntu SMP Fri Feb 19 18:03:38 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux

# lsmod | grep amdgpu
amdgpu               4579328  0
amd_iommu_v2           20480  1 amdgpu
gpu_sched              32768  1 amdgpu
i2c_algo_bit           16384  1 amdgpu
ttm                   106496  1 amdgpu
drm_kms_helper        184320  1 amdgpu
drm                   491520  4 gpu_sched,drm_kms_helper,amdgpu,ttm

EDIT: The amdgpu utils thing is a red herring; apparently the Debian repos only have an old version. After getting the fresh version installed, it sees the GPU OK. NM.

sigh OK, all is well now. Found this thread giving the correct kernel package to use to upgrade. Wish there was one newer than 5.8, but oh well. At least that has the newer drivers for my GPU.
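In case it helps someone searching later: on 20.04 the usual route to a 5.8 kernel is the HWE stack, which I’m assuming is what that thread pointed at:

sudo apt install --install-recommends linux-generic-hwe-20.04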

Also, the Zsys auto-snapshot/revert worked great when I accidentally installed a different kernel (or installed it the wrong way, or something - I did something wrong). A simple grub “revert update made on X at Y” worked AWESOME.