
Remove ufs-specific mountroot code that waited for devices.
Accepted · Public

Authored by ian on Mar 11 2018, 12:05 AM.

Details

Reviewers
trasz
imp
Summary

The existing code that waits for root filesystem devices to be available works only for a few filesystems, primarily ufs, because it is based on polling for a device name to appear in /dev, after waiting for root mount holds to be released. In r330745 a new loop was added that just retries the kernel_mount() call until it either succeeds or the timeout expires. That makes the detection filesystem-centric rather than device-centric, so it works for filesystems such as zfs which aren't tied to a specific device name.
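For illustration, here is a minimal sketch of the retry pattern r330745 introduced. The helper try_mount_root(), the ROOT_MOUNT_TIMEOUT constant, and the standard C time/sleep calls are stand-ins for the real kernel_mount() machinery and kernel primitives; this is not the actual vfs_mountroot.c code, only the shape of the loop:

#include <time.h>
#include <unistd.h>

/* Hypothetical stand-in for a single kernel_mount() attempt. */
extern int try_mount_root(const char *fstype, const char *from);

#define ROOT_MOUNT_TIMEOUT 30   /* seconds; in the kernel this is a tunable */

/*
 * Filesystem-centric retry: keep asking the filesystem to mount root until
 * it succeeds or the timeout expires, instead of polling /dev for a device
 * name to appear.
 */
int
mount_root_with_retry(const char *fstype, const char *from)
{
        time_t deadline = time(NULL) + ROOT_MOUNT_TIMEOUT;
        int error;

        for (;;) {
                error = try_mount_root(fstype, from);
                if (error == 0)
                        return (0);     /* root is mounted */
                if (time(NULL) >= deadline)
                        return (error); /* give up; fall back to the mountroot prompt */
                sleep(1);               /* brief delay before retrying */
        }
}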

Given that the new fs-centric retry loop works fine for all filesystem types, the old code that did device name lookups and waited for root mount holds to be released is now obsolete, with one exception: the user can set vfs.root_mount_always_wait=1 to request that the mountroot process always wait for all root mount holds to be released, even if the filesystem needed to mount root is already available. Some users do that to ensure all usb devices are ready before any rc scripts run.
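For reference, that always-wait behaviour is requested through the loader tunable named above, typically set in /boot/loader.conf:

# Force mountroot to wait for every root mount hold to be released (e.g.
# slow USB controllers) even if the root filesystem is already mountable.
vfs.root_mount_always_wait="1"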

So this change removes all of the old code that did device name lookups (and the conditional waiting for root holds that preceded them), and replaces it with a single wait for all root holds that happens only when the user has requested it.
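The resulting control flow is roughly the following sketch. The names root_mount_always_wait, wait_for_root_holds(), and mount_root_with_retry() are placeholders used only to illustrate the logic, not the actual vfs_mountroot.c symbols:

#include <stdbool.h>

/* Hypothetical stand-ins for the real root-hold bookkeeping. */
extern bool root_mount_always_wait;     /* set from vfs.root_mount_always_wait */
extern void wait_for_root_holds(void);  /* blocks until every hold is released */
extern int mount_root_with_retry(const char *fstype, const char *from);

static int
vfs_mountroot_sketch(const char *fstype, const char *from)
{
        /* Wait for all holds up front only if the user explicitly asked. */
        if (root_mount_always_wait)
                wait_for_root_holds();

        /* Slow devices are otherwise handled by the retry loop itself. */
        return (mount_root_with_retry(fstype, from));
}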

This does not remove the root hold system itself, just the redundant waiting related to it that was obsoleted by r330745.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Skipped
Unit
Tests Skipped
Build Status
Buildable 15482

Event Timeline

Don't know if this has gone in yet, but it seems sane to me. We don't need all that extra stuff anymore.

This revision is now accepted and ready to land. Apr 2 2018, 12:05 AM

My only worry is this: what if we had a zpool with devices that require different time to go online, and we mount rootfs while one of them is still offline? Wouldn't this result in a degraded root pool?

I have no idea what that means. We ask zfs if the rootfs is ready to use... are you saying that it might say Yes when the correct answer is No, and that should be fixed outside of the zfs code?

To be honest I'm not even sure if it _is_ an issue. It's just something to think about, and preferably test: what happens when you try to boot a ZFS mirror with one device missing.

And no, I'm not suggesting it shouldn't be fixed (if necessary) in ZFS - I'm just saying it would be nice to avoid introducing a regression.