
unionfs: allow vnode lock to be held shared during VOP_OPEN
Closed, Public

Authored by jah on Jan 3 2022, 2:56 PM.

Details

Summary

do_execve() will hold the vnode lock shared when it calls VOP_OPEN(),
but unionfs_open() requires the lock to be held exclusive to
correctly synchronize node status updates. This requirement is
asserted in unionfs_get_node_status().

Change unionfs_open() to temporarily upgrade the lock as is already
done in unionfs_close(). Also fix a related issue in which
unionfs_lock() can incorrectly add LK_NOWAIT during a downgrade
operation, which trips a lockmgr assertion. Finally, add some
comments to clarify the expected behavior in cases where the vnode
may be doomed while its lock is temporarily dropped.
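In outline, the fix described above might look like the following. This is an illustrative sketch using the FreeBSD VFS API (vn_lock(), VN_IS_DOOMED(), VTOUNIONFS()), not the literal patch; error handling and the open logic itself are elided.

```c
/*
 * Sketch (simplified): unionfs_open() can be entered with the vnode
 * lock held shared (e.g. from do_execve()), but the node status
 * update requires it exclusive.
 */
static int
unionfs_open_sketch(struct vnode *vp, struct thread *td)
{
	struct unionfs_node_status *unsp;
	bool upgraded = false;
	int error = 0;

	if (VOP_ISLOCKED(vp) != LK_EXCLUSIVE) {
		/*
		 * LK_UPGRADE may drop the lock entirely; LK_RETRY
		 * guarantees we return with it held exclusive, but the
		 * vnode may have been doomed in the unlocked window.
		 */
		vn_lock(vp, LK_UPGRADE | LK_RETRY);
		upgraded = true;
		if (VN_IS_DOOMED(vp)) {
			error = ENOENT;
			goto out;
		}
	}
	/* Asserts that the vnode lock is held exclusive. */
	unionfs_get_node_status(VTOUNIONFS(vp), td, &unsp);
	/* ... open the upper/lower vnode, adjust open counts ... */
out:
	if (upgraded) {
		/*
		 * Plain downgrade; unionfs_lock() must not add
		 * LK_NOWAIT to this operation, or lockmgr asserts.
		 */
		vn_lock(vp, LK_DOWNGRADE | LK_RETRY);
	}
	return (error);
}
```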

unionfs: add stress2 scenarios for write references

Add some test cases, based on the existing nullfs10 scenario, to
ensure that unionfs write references are propagated between the
unionfs and underlying vnodes, including across unionfs copy-on-write
operations.

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

jah requested review of this revision. Jan 3 2022, 2:56 PM
sys/fs/unionfs/union_subr.c
528–530

This is quite clear from the assert. I do not see much use in repeating the code verbatim in a comment.

sys/fs/unionfs/union_vnops.c
540

The vnode could become reclaimed after this dance. Then the argument about VOP_OPEN() does not hold, because you call e.g. unionfs_get_node_status(), which operates on the vnode's fs-specific data.

The three new tests are missing the x bit.

sys/fs/unionfs/union_vnops.c
540

Wow, you'd think I would've noticed all the places we use unp when I wrote the original change. Also it seems better to check for a doomed vnode here regardless of what the code below does, as a defense against future breakage.
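The defensive check being discussed might look roughly like this (a sketch using the FreeBSD VFS API; the error code and placement are illustrative):

```c
/*
 * Sketch: after any operation that may have transiently dropped the
 * vnode lock (such as a lock upgrade), revalidate the vnode before
 * touching its fs-specific data.
 */
if (VN_IS_DOOMED(vp)) {
	/*
	 * The vnode was reclaimed while unlocked; its v_data may
	 * already be gone, so unionfs_get_node_status() and friends
	 * must not run.
	 */
	error = ENOENT;
	goto out;
}
unp = VTOUNIONFS(vp);	/* only now safe to dereference */
```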

Check for reclaimed vnodes in various places, chmod +x unionfs1[0-2].sh

This revision is now accepted and ready to land. Jan 4 2022, 11:32 PM
sys/fs/unionfs/union_vnops.c
1594

Would it be worthwhile to factor out this upgrade code into a subroutine?

Factor out lock upgrade/downgrade

This revision now requires review to proceed. Jan 6 2022, 6:17 PM
sys/fs/unionfs/union_vnops.c
1594

I think there's room for cleanup here. See what you think of the update I just posted.
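One plausible shape for the factored-out helper pair (names and enum values are illustrative, loosely following the direction the review took):

```c
/* Illustrative sketch of a factored-out upgrade/downgrade pair. */
enum unionfs_lkupgrade {
	UNIONFS_LKUPGRADE_ALREADY,	/* lock was already exclusive */
	UNIONFS_LKUPGRADE_UPGRADED,	/* shared -> exclusive succeeded */
	UNIONFS_LKUPGRADE_DOOMED	/* vnode reclaimed during upgrade */
};

static enum unionfs_lkupgrade
unionfs_upgrade_lock(struct vnode *vp)
{
	ASSERT_VOP_LOCKED(vp, __func__);

	if (VOP_ISLOCKED(vp) == LK_EXCLUSIVE)
		return (UNIONFS_LKUPGRADE_ALREADY);
	/*
	 * An LK_UPGRADE failure means the lock was dropped; re-acquire
	 * it exclusive and check whether the vnode was reclaimed in
	 * the window.
	 */
	if (vn_lock(vp, LK_UPGRADE) != 0) {
		vn_lock(vp, LK_EXCLUSIVE | LK_RETRY);
		if (VN_IS_DOOMED(vp))
			return (UNIONFS_LKUPGRADE_DOOMED);
	}
	return (UNIONFS_LKUPGRADE_UPGRADED);
}

static void
unionfs_downgrade_lock(struct vnode *vp, enum unionfs_lkupgrade status)
{
	/* Restore a shared lock only if we were the ones who upgraded. */
	if (status != UNIONFS_LKUPGRADE_ALREADY)
		vn_lock(vp, LK_DOWNGRADE | LK_RETRY);
}
```

The enum return lets each caller distinguish "nothing to undo", "downgrade on the way out", and "bail out, the vnode is gone" without re-deriving that state from the lock itself.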

I'm sure you've observed this already, but this behaviour of potentially dropping the vnode lock seems dubious: the vnode interface doesn't say anything about whether a particular VOP implementation is allowed to do that, and callers might assume that it doesn't happen. Moreover, lock upgrades will often succeed, so problems that arise from dropping the lock will be rare and hard to debug. Maybe unionfs can use an internal lock to provide mutual exclusion without having to upgrade, but that is probably a simplistic idea, or maybe this is actually not much of a problem in practice.

Seems good to me, but I didn't really look at the tests.

sys/fs/unionfs/union_vnops.c
1594

Thanks, I think this is nicer.

This revision is now accepted and ready to land. Jan 6 2022, 8:08 PM


I've certainly noticed the same thing, but I also think it's unavoidable that there will be a few cases in which unionfs needs to temporarily drop and re-acquire vnode locks. There will always be situations in which both the upper and lower vnodes will need to be locked at some point, requiring either the moral equivalent of vn_lock_pair() or the locks to be dropped and acquired sequentially. That said, I suspect that just as with unionfs locking, unionfs node status management can be done better than it is today, with most places simply using the upper vnode (if available) rather than consulting the node status.