For typical consumers the v_vnlock pointer is set to the lock embedded in the same vnode, so the pointer and the lock share a cacheline. As a result, lock-word writes by other CPUs during concurrent access induce cache misses on the pointer read and make vop_stdlock/vop_stdunlock show up in profiles.
Details

Diff Detail

- Repository: rS FreeBSD src repository - subversion
- Lint: Lint Skipped
- Unit: Tests Skipped
- Build Status: Buildable 27765

Event Timeline
sys/ufs/ufs/ufs_vnops.c:2743

I found this in ufs/ffs/ffs_snapshot.c:

    vp->v_vnlock = &sn->sn_lock;
    lockmgr(&vp->v_lock, LK_RELEASE, NULL);

I don't want to take chances on this one.
sys/ufs/ufs/ufs_vnops.c:2743

Yes, this is the main reason why v_vnlock was introduced at all. But I do not understand why you need vops for UFS. We only create vnodes with FFS vop vectors. Do you have an example where we instantiate a vnode with UFS vnodeops?
sys/ufs/ufs/ufs_vnops.c:2743

Currently, common ops like shared vnode locking perform avoidable memory accesses. On top of that, lockmgr provides features that none of tmpfs, zfs, devfs, and a few others need, while support for those features comes with a performance hit stemming from the additional accesses. Thus I'm trying to decouple a "minimal" lockmgr which still works for the first group, while providing separate entry points for the rest. As such, I don't think it is worth the effort to find out whether a specific vop vector in the UFS code can get away with the simpler variant.
sys/ufs/ufs/ufs_vnops.c:2743

The patch adds useless lines which will confuse anybody trying to understand what is going on.