
Today

jah added inline comments to D44788: unionfs_rename: fix numerous locking issues.
Fri, Apr 19, 2:50 AM
jah updated the diff for D44788: unionfs_rename: fix numerous locking issues.
  • Fix fdvp lock recursion during file copy-up; use ERELOOKUP to simplify
Fri, Apr 19, 2:44 AM

Wed, Apr 17

jah added inline comments to D44788: unionfs_rename: fix numerous locking issues.
Wed, Apr 17, 9:14 PM

Tue, Apr 16

jah added inline comments to D44788: unionfs_rename: fix numerous locking issues.
Tue, Apr 16, 2:33 AM

Mon, Apr 15

jah added a comment to D44788: unionfs_rename: fix numerous locking issues.

The main problem that I see with these changes is that they lead to dropping support for FSes with non-recursive locking (see some of the inline comments). I think there are also some minor problems in the locking/relookup logic (see the inline comments as well).

Besides, unionfs_rename() still has numerous problems beyond locking, and I'm wondering whether it's worth it for everybody to pursue this direction before I've started the unionfs project, which will include an overhaul of its fundamentals. It will probably take less time to mostly rewrite it from there than to try to fix all these deficiencies, especially given that the most fundamental ones are not readily visible even with runs of stress2.

In D44788#1020876, @jah wrote:
In D44788#1020860, @kib wrote:

Could you try to (greatly) simplify unionfs rename by using ERELOOKUP? For instance, it can be split into two essentially independent cases: 1. need to copy fdvp from lower to upper (and return ERELOOKUP) 2. Just directly call VOP_RENAME() on upper if copy is not needed.

Splitting out the copy-up before calling VOP_RENAME() is a necessity, independently of ERELOOKUP, to be able to restart/cancel an operation that is interrupted (e.g., by a system crash). With ERELOOKUP, part of the code should go into, or be called from, unionfs_lookup() instead. I doubt this will simplify things per se, i.e., more than extracting the code to a helper function would. Later on, as placeholders are implemented, no such copy should even be necessary, which makes it apparent that unionfs_lookup() is not a good place to make that decision or undertake that action.

I think that's a good idea; ERELOOKUP is probably what we really want to use in most (all?) of the cases in which we currently use unionfs_relookup_*. There will be some penalty for making the vfs_syscall layer re-run the entire lookup instead of re-running only the last level, but those cases are never on the fast path anyway.

ERELOOKUP restarts the lookup at the latest directory reached, so I'm not sure which penalty you are talking about.
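
A minimal sketch of the two-case split suggested above, assuming a hypothetical unionfs_copy_up() helper and eliding the locking and reference-transfer details the real diff has to handle (this is not the D44788 code):

/*
 * Hedged sketch only: the shape of the ERELOOKUP split, not the actual diff.
 * unionfs_copy_up() is a made-up stand-in for the real copy-up routine.
 */
static int
unionfs_rename_sketch(struct vop_rename_args *ap)
{
        struct unionfs_node *fdunp = VTOUNIONFS(ap->a_fdvp);
        struct unionfs_node *funp = VTOUNIONFS(ap->a_fvp);
        int error;

        /*
         * Case 1: fdvp exists only on the lower layer.  Copy it up, then
         * return ERELOOKUP so the syscall layer redoes the lookup and finds
         * the freshly created upper vnode.
         */
        if (fdunp->un_uppervp == NULL) {
                error = unionfs_copy_up(ap->a_fdvp);    /* hypothetical */
                return (error != 0 ? error : ERELOOKUP);
        }

        /*
         * Case 2: no copy is needed; forward the rename to the upper FS
         * (assuming the remaining operands are also present on the upper
         * layer; reference/lock handling elided).
         */
        return (VOP_RENAME(fdunp->un_uppervp, funp->un_uppervp, ap->a_fcnp,
            VTOUNIONFS(ap->a_tdvp)->un_uppervp,
            ap->a_tvp != NULL ? VTOUNIONFS(ap->a_tvp)->un_uppervp : NULL,
            ap->a_tcnp));
}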

Mon, Apr 15, 8:36 PM

Sun, Apr 14

jah added a comment to D44788: unionfs_rename: fix numerous locking issues.
In D44788#1020860, @kib wrote:

Could you try to (greatly) simplify unionfs rename by using ERELOOKUP? For instance, it can be split into two essentially independent cases: 1. need to copy fdvp from lower to upper (and return ERELOOKUP) 2. Just directly call VOP_RENAME() on upper if copy is not needed.

Sun, Apr 14, 3:00 PM

Sat, Apr 13

jah added inline comments to D44788: unionfs_rename: fix numerous locking issues.
Sat, Apr 13, 10:44 PM
jah requested review of D44788: unionfs_rename: fix numerous locking issues.
Sat, Apr 13, 10:37 PM

Tue, Apr 9

jah committed rGb18029bc59d2: unionfs_lookup(): fix wild accesses to vnode private data (authored by jah).
unionfs_lookup(): fix wild accesses to vnode private data
Tue, Apr 9, 10:38 PM
jah closed D44601: unionfs_lookup(): fix wild accesses to vnode private data.
Tue, Apr 9, 10:38 PM

Sun, Apr 7

jah committed rG7b86d14bfccb: unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling (authored by jah).
unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling
Sun, Apr 7, 12:32 AM

Wed, Apr 3

jah added a comment to D44601: unionfs_lookup(): fix wild accesses to vnode private data.

These changes plug holes indeed.

Side note: it's likely that I'll rewrite the whole lookup code at some point, so I'll have to test again for races. The problem with this kind of bug is that it is triggered only by rare races. We already have stress2, which is great but also relies on "chance". This makes me think that perhaps we could have a more systematic framework that triggers vnode dooming at, say, unlock time. I'll probably explore that at some point.
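
For illustration, the kind of hook imagined here could look roughly like the following; it is entirely hypothetical (no such knob exists in the tree), but it only uses existing primitives: a debug sysctl that dooms a still-locked vnode right before it would be unlocked, so dooming races fire deterministically instead of by chance.

/* Hypothetical debug aid; the sysctl and helper names are made up. */
static int unionfs_debug_doom_on_unlock = 0;
SYSCTL_INT(_debug, OID_AUTO, unionfs_doom_on_unlock, CTLFLAG_RWTUN,
    &unionfs_debug_doom_on_unlock, 0,
    "Doom vnodes just before unlocking them to shake out dooming races");

static void
unionfs_maybe_doom(struct vnode *vp)
{
        /* vgone() requires the vnode lock, which the caller still holds. */
        if (unionfs_debug_doom_on_unlock != 0 && !VN_IS_DOOMED(vp))
                vgone(vp);
}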

Wed, Apr 3, 1:48 PM

Tue, Apr 2

jah requested review of D44601: unionfs_lookup(): fix wild accesses to vnode private data.
Tue, Apr 2, 10:44 PM

Sun, Mar 24

jah committed rG61d9b0cb38bb: uipc_bindat(): Explicitly specify exclusive locking for the new vnode (authored by jah).
uipc_bindat(): Explicitly specify exclusive locking for the new vnode
Sun, Mar 24, 3:06 AM
jah committed rG6d118b958612: unionfs: accommodate underlying FS calls that may re-lock (authored by jah).
unionfs: accommodate underlying FS calls that may re-lock
Sun, Mar 24, 3:05 AM
jah committed rGb09b120818a8: vn_lock_pair(): allow lkflags1/lkflags2 to be 0 if vp1/vp2 is NULL (authored by jah).
vn_lock_pair(): allow lkflags1/lkflags2 to be 0 if vp1/vp2 is NULL
Sun, Mar 24, 3:05 AM
jah committed rGeee6217b40df: unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling (authored by jah).
unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling
Sun, Mar 24, 2:13 AM
jah closed D44288: unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling.
Sun, Mar 24, 2:12 AM

Mar 16 2024

jah updated the diff for D44288: unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling.

Code review feedback, also remove a nonsensical check from unionfs_link()

Mar 16 2024, 3:49 PM

Mar 10 2024

jah requested review of D44288: unionfs: implement VOP_UNP_* and remove special VSOCK vnode handling.
Mar 10 2024, 2:10 AM
jah committed rG6c8ded001540: unionfs: accommodate underlying FS calls that may re-lock (authored by jah).
unionfs: accommodate underlying FS calls that may re-lock
Mar 10 2024, 1:59 AM
jah closed D44076: unionfs: accommodate underlying FS calls that may re-lock.
Mar 10 2024, 1:59 AM
jah closed D44047: uipc_bindat(): Explicitly specify exclusive locking for the new vnode.
Mar 10 2024, 1:53 AM
jah committed rGd56c175ac935: uipc_bindat(): Explicitly specify exclusive locking for the new vnode (authored by jah).
uipc_bindat(): Explicitly specify exclusive locking for the new vnode
Mar 10 2024, 1:53 AM
jah committed rGfa26f46dc29f: vn_lock_pair(): allow lkflags1/lkflags2 to be 0 if vp1/vp2 is NULL (authored by jah).
vn_lock_pair(): allow lkflags1/lkflags2 to be 0 if vp1/vp2 is NULL
Mar 10 2024, 1:47 AM
jah closed D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.
Mar 10 2024, 1:46 AM

Mar 4 2024

jah committed rG5e806288f0c7: unionfs: cache upper/lower mount objects (authored by jah).
unionfs: cache upper/lower mount objects
Mar 4 2024, 6:52 PM
jah committed rG9c530578757b: unionfs: upgrade the vnode lock during fsync() if necessary (authored by jah).
unionfs: upgrade the vnode lock during fsync() if necessary
Mar 4 2024, 6:52 PM
jah committed rGc18e6a5a5c63: unionfs: work around underlying FS failing to respect cn_namelen (authored by jah).
unionfs: work around underlying FS failing to respect cn_namelen
Mar 4 2024, 6:52 PM
jah committed rGd0bb255d1fcb: VFS: update VOP_FSYNC() debug check to reflect actual locking policy (authored by jah).
VFS: update VOP_FSYNC() debug check to reflect actual locking policy
Mar 4 2024, 6:52 PM

Feb 29 2024

jah updated the diff for D44076: unionfs: accommodate underlying FS calls that may re-lock.

Incorporate code review feedback from olce@

Feb 29 2024, 5:23 AM

Feb 24 2024

jah added a comment to D44076: unionfs: accommodate underlying FS calls that may re-lock.

This basically amounts to a generalized version of the mkdir()-specific fix I made last year in commit 93fe61afde72e6841251ea43551631c30556032d (of course in that commit I also inadvertently added a potential v_usecount ref leak on the new vnode). Or I guess it can be thought of as a tailored version of null_bypass().
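
For context, a simplified sketch of the pattern this change generalizes (not the committed diff): after forwarding an operation to the upper layer, re-check that the unionfs vnode is still alive, because the underlying FS may have transiently dropped and re-acquired the shared lock.

/*
 * Simplified illustration only.  The underlying VOP_MKDIR() may drop and
 * re-take the lock that unionfs shares with it, so a concurrent forced
 * unmount can doom the unionfs directory vnode in that window.
 */
static int
unionfs_mkdir_sketch(struct vnode *dvp, struct vnode *udvp,
    struct componentname *cnp, struct vattr *vap, struct vnode **vpp)
{
        struct vnode *uvp;
        int error;

        error = VOP_MKDIR(udvp, &uvp, cnp, vap);        /* may re-lock */
        if (error != 0)
                return (error);
        if (__predict_false(VN_IS_DOOMED(dvp))) {
                /* A forced unmount slipped in while the lock was dropped. */
                vput(uvp);
                return (ENOENT);
        }
        *vpp = uvp;
        return (0);
}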

Feb 24 2024, 11:59 PM
jah requested review of D44076: unionfs: accommodate underlying FS calls that may re-lock.
Feb 24 2024, 11:57 PM

Feb 23 2024

jah updated the diff for D44047: uipc_bindat(): Explicitly specify exclusive locking for the new vnode.

Only clear LK_SHARED

Feb 23 2024, 11:39 PM
jah updated the diff for D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.

Only allow lkflags to be 0 when the corresponding vnode is NULL

Feb 23 2024, 11:39 PM
jah added a comment to D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.
In D44046#1004894, @kib wrote:
In D44046#1004891, @jah wrote:
In D44046#1004890, @kib wrote:

So might be just allow zero flags if corresponding vp is NULL?

Sure, we could do that, but I'm curious: is there some reason why we should care what the lockflags are if there is no vnode to lock? What I have here seems more straightforward than making specific allowances for NULL vnodes.

My point is about reliable checking of the API contracts. Assume that some function calls vn_lock_pair() with externally specified flags, and the corresponding vp could sometimes be NULL. I want such calls to always have correct flags, especially if vp != NULL is rare or cannot easily be exercised by normal testing.
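
Roughly, the contract being argued for can be expressed with assertions like these (a sketch of the intent, not the exact committed wording): flags may be 0 only when the corresponding vnode is NULL, and must otherwise name a valid lock type.

/* Sketch only: lkflags must be a real lock type unless the vnode is NULL. */
MPASS(lkflags1 == LK_SHARED || lkflags1 == LK_EXCLUSIVE ||
    (vp1 == NULL && lkflags1 == 0));
MPASS(lkflags2 == LK_SHARED || lkflags2 == LK_EXCLUSIVE ||
    (vp2 == NULL && lkflags2 == 0));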

Feb 23 2024, 8:58 PM
jah added a comment to D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.
In D44046#1004890, @kib wrote:

So might be just allow zero flags if corresponding vp is NULL?

Feb 23 2024, 6:09 PM
jah added a comment to D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.
In D44046#1004867, @kib wrote:

May I ask why? This allows passing arbitrary flags for NULL vnodes.

Feb 23 2024, 6:02 PM
jah added a comment to D44047: uipc_bindat(): Explicitly specify exclusive locking for the new vnode.

This is needed for upcoming work to adopt VOP_UNP_* in unionfs.

Feb 23 2024, 5:56 PM
jah requested review of D44047: uipc_bindat(): Explicitly specify exclusive locking for the new vnode.
Feb 23 2024, 5:55 PM
jah requested review of D44046: vn_lock_pair(): only assert on lkflags1/lkflags2 vp1/vp2 is non-NULL.
Feb 23 2024, 5:50 PM

Feb 18 2024

jah closed D43818: unionfs: workaround underlying FS failing to respect cn_namelen.
Feb 18 2024, 3:20 PM
jah committed rGa2ddbe019d51: unionfs: work around underlying FS failing to respect cn_namelen (authored by jah).
unionfs: work around underlying FS failing to respect cn_namelen
Feb 18 2024, 3:20 PM
jah closed D43817: unionfs: upgrade the vnode lock during fsync() if necessary.
Feb 18 2024, 3:18 PM
jah committed rG2656fc29be8b: unionfs: upgrade the vnode lock during fsync() if necessary (authored by jah).
unionfs: upgrade the vnode lock during fsync() if necessary
Feb 18 2024, 3:18 PM
jah committed rG9530182e371d: VFS: update VOP_FSYNC() debug check to reflect actual locking policy (authored by jah).
VFS: update VOP_FSYNC() debug check to reflect actual locking policy
Feb 18 2024, 3:17 PM
jah closed D43816: VFS: update VOP_FSYNC() debug check to reflect actual locking policy.
Feb 18 2024, 3:16 PM
jah committed rGcc3ec9f75978: unionfs: cache upper/lower mount objects (authored by jah).
unionfs: cache upper/lower mount objects
Feb 18 2024, 3:15 PM
jah closed D43815: unionfs: cache upper/lower mount objects.
Feb 18 2024, 3:15 PM

Feb 13 2024

jah added a comment to D43815: unionfs: cache upper/lower mount objects.
In D43815#1000687, @jah wrote:
In D43815#1000340, @jah wrote:

I don't think it can. Given the first point above, there can't be any unmount of some layer (even forced) until the unionfs mount on top is unmounted. As the layers' root vnodes are vrefed(), they can't become doomed (since unmount of their own FS is prevented), and consequently their v_mount is never modified (barring the ZFS rollback case). This is independent of holding (or not) any vnode lock.

Which doesn't mean that there aren't any problems of the sort that you're reporting in unionfs; it's just a different matter.

That's not true; vref() does nothing to prevent a forced unmount from dooming the vnode, only holding its lock does this. As such, if the lock needs to be transiently dropped for some reason and the timing is sufficiently unfortunate, the concurrent recursive forced unmount can first unmount unionfs (dooming the unionfs vnode) and then the base FS (dooming the lower/upper vnode). The held references prevent the vnodes from being recycled (but not doomed), but even this isn't foolproof: for example, in the course of being doomed, the unionfs vnode will drop its references on the lower/upper vnodes, at which point they may become unreferenced unless additional action is taken. Whatever caller invoked the unionfs VOP will of course still hold a reference on the unionfs vnode, but this does not automatically guarantee that references will be held on the underlying vnodes for the duration of the call, due to the aforementioned scenario.

There is a misunderstanding. I'm very well aware of what you are saying, as you should know. But this is not my point, which concerns the sentence "Use of [vnode]->v_mount is unsafe in the presence of a concurrent forced unmount." in the context of the current change. The bulk of the latter is modifications to unionfs_vfsops.c, which contains VFS operations, not vnode ones. There are no vnodes involved there, except for accessing the layers' root ones. And what I'm saying, and what I showed above, is that v_mount on these, again in the context of a VFS operation, cannot become NULL because of a forced unmount (if you disagree, then please show where you think there is a flaw in the reasoning).

Actually the assertion about VFS operations isn't entirely true either (mostly, but not entirely); see the vfs_unbusy() dance we do in unionfs_quotactl().
But saying this makes me realize I actually need to bring back the atomic_load there (albeit the load should be of ump->um_uppermp now).

Otherwise your assertion should be correct, and indeed I doubt the two read-only VOPs in question would have these locking issues in practice.
I think the source of the misunderstanding here is that I just didn't word the commit message very well. Really what I meant there is what I said in a previous comment here: If we need to cache the mount objects anyway, it's better to use them everywhere to avoid the pitfalls of potentially accessing ->v_mount when it's unsafe to do so.
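
As an illustration of that last point (a sketch; um_uppermp is the field name mentioned above, the rest is schematic): read the cached upper mount from the unionfs mount data instead of chasing a base vnode's v_mount, with a volatile load in the paths that run unbusied, such as quotactl.

/* Sketch: prefer the cached mount pointer over [base vp]->v_mount. */
struct unionfs_mount *ump = MOUNTTOUNIONFSMOUNT(mp);
struct mount *uppermp = atomic_load_ptr(&ump->um_uppermp);

if (uppermp == NULL)
        return (EBUSY);         /* the upper layer went away under us */
/* ...forward the VFS operation to uppermp rather than to uppervp->v_mount... */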

Feb 13 2024, 1:45 PM
jah updated the diff for D43815: unionfs: cache upper/lower mount objects.

Restore volatile load from ump in quotactl()

Feb 13 2024, 12:30 PM
jah added a comment to D43815: unionfs: cache upper/lower mount objects.
In D43815#1000340, @jah wrote:

I don't think it can. Given the first point above, there can't be any unmount of some layer (even forced) until the unionfs mount on top is unmounted. As the layers' root vnodes are vrefed(), they can't become doomed (since unmount of their own FS is prevented), and consequently their v_mount is never modified (barring the ZFS rollback case). This is independent of holding (or not) any vnode lock.

Which doesn't mean that there aren't any problems of the sort that you're reporting in unionfs; it's just a different matter.

That's not true; vref() does nothing to prevent a forced unmount from dooming the vnode, only holding its lock does this. As such, if the lock needs to be transiently dropped for some reason and the timing is sufficiently unfortunate, the concurrent recursive forced unmount can first unmount unionfs (dooming the unionfs vnode) and then the base FS (dooming the lower/upper vnode). The held references prevent the vnodes from being recycled (but not doomed), but even this isn't foolproof: for example, in the course of being doomed, the unionfs vnode will drop its references on the lower/upper vnodes, at which point they may become unreferenced unless additional action is taken. Whatever caller invoked the unionfs VOP will of course still hold a reference on the unionfs vnode, but this does not automatically guarantee that references will be held on the underlying vnodes for the duration of the call, due to the aforementioned scenario.

There is a misunderstanding. I'm very well aware of what you are saying, as you should know. But this is not my point, which concerns the sentence "Use of [vnode]->v_mount is unsafe in the presence of a concurrent forced unmount." in the context of the current change. The bulk of the latter is modifications to unionfs_vfsops.c, which contains VFS operations, not vnode ones. There are no vnodes involved there, except for accessing the layers' root ones. And what I'm saying, and what I showed above, is that v_mount on these, again in the context of a VFS operation, cannot become NULL because of a forced unmount (if you disagree, then please show where you think there is a flaw in the reasoning).

Feb 13 2024, 12:26 PM

Feb 12 2024

jah added a comment to D43815: unionfs: cache upper/lower mount objects.
In D43815#1000214, @jah wrote:

If one of the layers is forcibly unmounted, there isn't much point in continuing operation. But, given the first point above, that cannot even happen. So really the only case in which v_mount can become NULL is the ZFS rollback one (the layers' root vnodes can't be recycled since they are vrefed). Thinking more about it, always testing whether these are alive and well is going to be inevitable going forward. But I'm fine with this change as it is for now.

This can indeed happen, despite the first point above. If a unionfs VOP ever temporarily drops its lock, another thread is free to stage a recursive forced unmount of both the unionfs and the base FS during this window. Moreover, it's easy for this to happen without unionfs even being aware of it: because unionfs shares its lock with the base FS, if a base FS VOP (forwarded by a unionfs VOP) needs to drop the lock temporarily (this is common e.g. for FFS operations that need to update metadata), the unionfs vnode may effectively be unlocked during that time. That last point is a particularly dangerous one; I have another pending set of changes to deal with the problems that can arise in that situation.

This is why I say it's easy to make a mistake in accessing [base vp]->v_mount at an unsafe time.

I don't think it can. Given the first point above, there can't be any unmount of some layer (even forced) until the unionfs mount on top is unmounted. As the layers' root vnodes are vrefed(), they can't become doomed (since unmount of their own FS is prevented), and consequently their v_mount is never modified (barring the ZFS rollback case). This is independent of holding (or not) any vnode lock.

Which doesn't mean that there aren't any problems of the sort that you're reporting in unionfs; it's just a different matter.

Feb 12 2024, 6:34 PM
jah added a comment to D43815: unionfs: cache upper/lower mount objects.
In D43815#999937, @jah wrote:

Well, as it is today unmounting of the base FS is either recursive or it doesn't happen at all (i.e. the unmount attempt is rejected immediately because of the unionfs stacked atop the mount in question). I don't think it can work any other way, although I could see the default settings around recursive unmounts changing (maybe vfs.recursive_forced_unmount being enabled by default, or recursive unmounts even being allowed for the non-forced case as well). I don't have plans to change any of those defaults though.

I was asking because I feared that the unmount could proceed in the non-recursive case, but indeed it's impossible (handled by the !TAILQ_EMPTY(&mp->mnt_uppers) test in dounmount()). As for the default value itself, I think it is fine as it is for now (it prevents unwanted foot-shooting).

For the changes here, you're right that the first reason isn't an issue as long as the unionfs vnode is locked when the [base_vp]->v_mount access happens, as the unionfs unmount can't complete while the lock is held which then prevents the base FS from being unmounted. But it's also easy to make a mistake there, e.g. in cases where the unionfs lock is temporarily dropped, so if the base mount objects need to be cached anyway because of the ZFS case then it makes sense to just use them everywhere.

If one of the layers is forcibly unmounted, there isn't much point in continuing operation. But, given the first point above, that cannot even happen. So really the only case in which v_mount can become NULL is the ZFS rollback one (the layers' root vnodes can't be recycled since they are vrefed). Thinking more about it, always testing whether these are alive and well is going to be inevitable going forward. But I'm fine with this change as it is for now.
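
The guard referred to here, in simplified form (a sketch, not the verbatim dounmount() code): a plain unmount of a layer is refused outright while a unionfs is stacked above it, so only a recursive unmount that first tears down the unionfs can proceed.

/* Simplified sketch of the dounmount() guard mentioned above. */
MNT_ILOCK(mp);
if (!TAILQ_EMPTY(&mp->mnt_uppers)) {
        /*
         * A unionfs (or other stacked FS) is mounted above this one and
         * has not been unmounted first, so reject the request.
         */
        MNT_IUNLOCK(mp);
        return (EBUSY);
}
MNT_IUNLOCK(mp);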

Feb 12 2024, 4:43 PM
jah added a comment to D43818: unionfs: workaround underlying FS failing to respect cn_namelen.
In D43818#999955, @olce wrote:

OK as a workaround. Hopefully, we'll get OpenZFS fixed soon. If you don't plan to, I may try to submit a patch upstream, since it seems no one has proposed any change in https://github.com/openzfs/zfs/issues/15705.

Feb 12 2024, 12:34 AM
jah added a comment to D40850: VFS lookup: New vn_cross_single_mount() and vn_cross_mounts().

@olce @mjg This change seems to have stalled, what do you want to do about it?

Feb 12 2024, 12:32 AM

Feb 11 2024

jah added a comment to D43815: unionfs: cache upper/lower mount objects.
In D43815#999912, @olce wrote:

I think this goes in the right direction long term also.

Longer term, do you have any thoughts on only supporting recursive unmounting, regardless of whether it is forced or not? That would eliminate the first reason mentioned in the commit message.

Feb 11 2024, 6:22 PM
jah updated the diff for D43815: unionfs: cache upper/lower mount objects.

Style

Feb 11 2024, 7:12 AM
jah updated the diff for D43818: unionfs: workaround underlying FS failing to respect cn_namelen.

Update comment

Feb 11 2024, 6:57 AM
jah requested review of D43818: unionfs: workaround underlying FS failing to respect cn_namelen.

Sadly my attempt at something less hacky didn't really improve things.

Feb 11 2024, 6:56 AM
jah added inline comments to D43817: unionfs: upgrade the vnode lock during fsync() if necessary.
Feb 11 2024, 6:39 AM
jah planned changes to D43818: unionfs: workaround underlying FS failing to respect cn_namelen.

Putting this on hold, as I'm evaluating a less-hacky approach.

Feb 11 2024, 5:28 AM

Feb 10 2024

jah added a comment to D43818: unionfs: workaround underlying FS failing to respect cn_namelen.

Also filed https://github.com/openzfs/zfs/issues/15705, as I think that would benefit OpenZFS as well.

Feb 10 2024, 4:53 PM
jah requested review of D43818: unionfs: workaround underlying FS failing to respect cn_namelen.
Feb 10 2024, 4:49 PM
jah requested review of D43817: unionfs: upgrade the vnode lock during fsync() if necessary.
Feb 10 2024, 4:39 PM
jah requested review of D43816: VFS: update VOP_FSYNC() debug check to reflect actual locking policy.
Feb 10 2024, 4:35 PM
jah requested review of D43815: unionfs: cache upper/lower mount objects.
Feb 10 2024, 4:31 PM

Jan 2 2024

jah committed rG10f2e94acc1e: vm_page_reclaim_contig(): update comment to chase recent changes (authored by jah).
vm_page_reclaim_contig(): update comment to chase recent changes
Jan 2 2024, 9:44 PM

Dec 24 2023

jah committed rG0ee1cd6da960: vm_page.h: tweak page-busied assertion macros (authored by jah).
vm_page.h: tweak page-busied assertion macros
Dec 24 2023, 5:40 AM
jah committed rG2619c5ccfe1f: Avoid waiting on physical allocations that can't possibly be satisfied (authored by jah).
Avoid waiting on physical allocations that can't possibly be satisfied
Dec 24 2023, 5:40 AM
jah closed D42706: Avoid waiting on physical allocations that can't possibly be satisfied.
Dec 24 2023, 5:40 AM

Dec 1 2023

jah updated the diff for D42706: Avoid waiting on physical allocations that can't possibly be satisfied.

Apply code review feedback from markj

Dec 1 2023, 4:33 AM

Nov 30 2023

jah added inline comments to D42706: Avoid waiting on physical allocations that can't possibly be satisfied.
Nov 30 2023, 5:30 PM

Nov 24 2023

jah updated the diff for D42706: Avoid waiting on physical allocations that can't possibly be satisfied.

Eliminate extraneous call to vm_phys_find_range()

Nov 24 2023, 5:41 AM
jah added inline comments to D42706: Avoid waiting on physical allocations that can't possibly be satisfied.
Nov 24 2023, 5:20 AM

Nov 23 2023

jah updated the diff for D42706: Avoid waiting on physical allocations that can't possibly be satisfied.

Avoid allocation in the ERANGE case, assert that return status is ENOMEM if not 0/ERANGE.

Nov 23 2023, 9:01 PM

Nov 21 2023

jah added inline comments to D42706: Avoid waiting on physical allocations that can't possibly be satisfied.
Nov 21 2023, 11:57 PM
jah requested review of D42706: Avoid waiting on physical allocations that can't possibly be satisfied.
Nov 21 2023, 11:45 PM

Nov 16 2023

jah accepted D42625: fuse copy_file_range() fixes.
Nov 16 2023, 12:42 AM

Nov 15 2023

jah added inline comments to D42625: fuse copy_file_range() fixes.
Nov 15 2023, 11:57 PM
jah accepted D42625: fuse copy_file_range() fixes.
Nov 15 2023, 11:35 PM

Nov 13 2023

jah added inline comments to D42554: vn_copy_file_range(): busy both in and out mp around call to VOP_COPY_FILE_RANGE().
Nov 13 2023, 4:57 PM
jah accepted D42554: vn_copy_file_range(): busy both in and out mp around call to VOP_COPY_FILE_RANGE().
Nov 13 2023, 4:18 PM
jah added inline comments to D42554: vn_copy_file_range(): busy both in and out mp around call to VOP_COPY_FILE_RANGE().
Nov 13 2023, 3:17 PM
jah added inline comments to D42554: vn_copy_file_range(): busy both in and out mp around call to VOP_COPY_FILE_RANGE().
Nov 13 2023, 2:38 PM

Nov 12 2023

jah committed rG66b8f5484cfe: vfs_lookup_cross_mount(): restore previous do...while loop (authored by jah).
vfs_lookup_cross_mount(): restore previous do...while loop
Nov 12 2023, 2:57 AM

Nov 4 2023

jah committed rG586fed0b0356: vfs_lookup_cross_mount(): restore previous do...while loop (authored by jah).
vfs_lookup_cross_mount(): restore previous do...while loop
Nov 4 2023, 5:16 PM

Oct 2 2023

jah added a comment to D42008: tun/tap: correct ref count on cloned cdevs.

From the original PR it also sounds as though this sort of refcounting issue is a common problem with drivers that use the clone facility? Could clone_create() be changed to automatically add the reference to an existing device, or perhaps a wrapper around clone_create() that does this automatically? Or would that merely create different complications elsewhere?

Oct 2 2023, 5:56 PM
jah added a comment to D42008: tun/tap: correct ref count on cloned cdevs.
In D42008#958212, @kib wrote:

Devfs clones are a way to handle (reserve) unit numbers. It seems that phk decided that the least involved way to code it was to just keep the whole cdev with the unit number somewhere (on the clone list). These clones are not referenced; they exist by the mere fact of being on the clone list. When a device driver allocates a clone, it must make it fully correct, including the ref count.

References on a cdev protect against freeing of the device memory; they do not determine the lifecycle of the device. A device is created with make_dev() and destroyed with destroy_dev(); the latter does not free the memory and does not even drop a reference. Devfs nodes are managed outside the driver context, by a combination of the dev_clone eventhandler and the devfs_populate_loop() top-level code. The eventhandler is supposed to return the device with an additional reference to protect against a parallel populate loop, and the loop is the code that usually drops the last ref on a destroyed (in the destroy_dev() sense) device.

So a typical driver does not need to manage dev_ref()/dev_rel() except at initial device creation, where clones and the dev_clone context add some complications.
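
A condensed sketch of that pattern in the shape of a tun/tap-style dev_clone handler (the foo_* names are made up; the dev_ref() on an already-existing clone is the kind of reference accounting D42008 is concerned with):

static struct clonedevs *foo_clones;    /* initialized via clone_setup() */
static struct cdevsw foo_cdevsw;        /* details elided */

static void
foo_clone(void *arg, struct ucred *cred, char *name, int namelen,
    struct cdev **dev)
{
        int unit;

        if (*dev != NULL)
                return;                 /* already resolved by someone else */
        if (dev_stdclone(name, NULL, "foo", &unit) != 1)
                return;                 /* not one of ours */
        if (clone_create(&foo_clones, &foo_cdevsw, &unit, dev, 0) != 0) {
                /* No existing clone: create the cdev with its initial ref. */
                *dev = make_dev_credf(MAKEDEV_REF, &foo_cdevsw, unit, cred,
                    UID_ROOT, GID_WHEEL, 0600, "foo%d", unit);
        } else {
                /*
                 * Reusing a cdev that is merely parked on the clone list:
                 * the list holds no reference, so take the one the
                 * eventhandler contract requires.
                 */
                dev_ref(*dev);
        }
}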

Oct 2 2023, 5:52 PM

Sep 28 2023

jah added a comment to D42008: tun/tap: correct ref count on cloned cdevs.

I've never used the clone KPIs before, so please forgive my ignorance in asking a couple of basic questions:

Sep 28 2023, 4:06 PM
jah committed rGee596061e5a5: devfs: add integrity asserts for cdevp_list (authored by jah).
devfs: add integrity asserts for cdevp_list
Sep 28 2023, 1:43 AM
jah committed rG23332e34e653: devfs: add integrity asserts for cdevp_list (authored by jah).
devfs: add integrity asserts for cdevp_list
Sep 28 2023, 1:29 AM

Sep 21 2023

jah committed rG67864268da53: devfs: add integrity asserts for cdevp_list (authored by jah).
devfs: add integrity asserts for cdevp_list
Sep 21 2023, 4:52 PM

Jul 24 2023

jah added a comment to D40883: vfs: factor out mount point traversal to a dedicated routine.
In D40883#931131, @mjg wrote:

huh, you just made me realize the committed change is buggy in that it fails to unlock dvp. i'll fix it up soon.

Jul 24 2023, 6:18 PM
jah added inline comments to D40852: Remove VV_CROSSLOCK flag, and logic in nullfs and unionfs.
Jul 24 2023, 6:18 PM

Jul 7 2023

jah added a comment to D40883: vfs: factor out mount point traversal to a dedicated routine.

Looks like a similar cleanup can be done in the needs_exclusive_leaf case at the end of vfs_lookup().

Jul 7 2023, 2:07 AM

Jul 3 2023

jah added inline comments to D40850: VFS lookup: New vn_cross_single_mount() and vn_cross_mounts().
Jul 3 2023, 11:35 PM
jah added inline comments to D40600: vfs_lookup(): remove VV_CROSSLOCK logic.
Jul 3 2023, 6:03 PM
jah added inline comments to D40850: VFS lookup: New vn_cross_single_mount() and vn_cross_mounts().
Jul 3 2023, 4:05 PM
jah added inline comments to D40600: vfs_lookup(): remove VV_CROSSLOCK logic.
Jul 3 2023, 3:41 PM