
2/2 lockmgr: provide routines usable as direct vfs calls
Needs Review · Public

Authored by mjg on Fri, Feb 14, 11:34 AM.

Details

Reviewers
kib
jeff
Summary

This avoids a level of indirection for these extremely common operations.

Combined with D23683, filesystems which use these go from _vn_lock directly to the locking routine, avoiding two levels of jumps through intermediate functions.

See the previous review for sample result (including other changes not covered by either review).

Note that lockmgr is basically a VFS and buffer cache lock, so I don't consider this a layering violation.

Diff Detail

Repository
rS FreeBSD src repository
Build Status
Buildable 29374

Event Timeline

mjg created this revision. Fri, Feb 14, 11:34 AM
mjg added inline comments. Fri, Feb 14, 11:35 AM
sys/kern/kern_lock.c
1231

The compiler inlines these here (same with islocked below), but this can be changed to not depend on it later.

jeff added a comment. Sun, Feb 16, 9:54 AM

lockstatus should also be an inline in the header.

sys/kern/kern_lock.c
1229

We would get the same amount of stack reduction while keeping the layering intact if the fast path of lockmgr_slock were in the header, consistent with the other lock types.

I think you could do one better by turning VOP_LOCK() into a macro that does:

if (__builtin_constant_p(flags) && flags == LK_SHARED)
	VOP_SLOCK();
...
else
	VOP_LOCK_FLAGS() or whatever.

mjg added a comment. Sun, Feb 16, 1:10 PM
In D23684#520596, @jeff wrote:

lockstatus should also be an inline in the header.

Note this does not change anything in terms of code generation for the current patch: a function call has to be made anyway, and lockmgr_vop_islocked inlines the routine. As for the layering concerns, see the other comments.

sys/kern/kern_lock.c
1229

I want to do _SLOCK/_XLOCK and _SUNLOCK/_XUNLOCK in addition to the current _LOCK/_UNLOCK but not precisely this way.

The argument for lining this up with other locking primitives does not work in my opinion. lockmgr already has a certain level of explicit vnode awareness. Any change in behavior (like adding adaptive spinning) de facto requires separate entry points, both to avoid additional accesses to the cacheline and to avoid enabling the behavior for the buffer cache. Most importantly, there is a vgone vs concurrent VOP_LOCK request bug which in my opinion is fixed most cleanly by requiring the vops themselves to provide the current vn_lock semantics. In fact I was thinking about putting this review on hold and taking care of vgone first, which would also give a clearer picture of what makes sense to do here.

mjg added inline comments. Sun, Feb 16, 1:15 PM
sys/kern/kern_lock.c
1229

I'm going to open a review for this by tomorrow.

jeff added inline comments. Sun, Feb 16, 7:28 PM
sys/kern/kern_lock.c
1229

We should really identify how and when this was broken, because it explicitly worked: VOP_LOCK could always return an error on identity change, and vn_lock() checked for it and looped under LK_RETRY. This made the whole thing safe with stacking, so you didn't touch anything when you woke up in deadfs.