
[wip] vfs: decentralize device usecount
Needs ReviewPublic

Authored by mjg on Wed, Sep 4, 8:19 PM.

Details

Reviewers
kib
jeff
Summary

Currently, VCHR vnodes are special-cased on each v_usecount bump in order to maintain a copy of the counter in the device. This causes an avoidable slowdown since ->v_type shares a cacheline with the count. The field could be moved far enough away to be less of a problem, but the constant special-casing would remain.

The proposed solution is simple in nature: keep an array of all vnodes associated with the device and, when necessary, walk it to compute the total usecount.

Of course, many devfs mount points can result in many vnodes using the same device, which poses a problem of possibly long walks. This is addressed in two ways:

  • Vnodes are almost never merely held; they are either fully activated or on the free list. Thus vhold and vdrop move the vnode in and out of the array as needed, effectively keeping only used vnodes there.
  • The exact count is almost never needed; instead, callers mostly want to know whether it is 0, 1 or 2. count_dev_cmp is introduced to terminate the walk as soon as the result is known.

The code is still a prototype and has rather verbose names. I tested a version which maintained both the centralized count and the new one, with the counting routine comparing the two. No mismatches were found over a period of running poudriere (both counts were modified only under dev_lock to maintain consistency).

Note that the v_rdev assignment, the table manipulations and the counting are all protected by dev_lock.

Also, the exact value of the count is racy to the same degree as before: previously, an update could have been stalled on dev_lock while the counter was being read.

Diff Detail

Repository
rS FreeBSD src repository
Build Status
Buildable 26278

Event Timeline

mjg created this revision. Wed, Sep 4, 8:19 PM
mjg edited the summary of this revision. (Show Details) Wed, Sep 4, 8:20 PM
mjg edited the summary of this revision. (Show Details) Wed, Sep 4, 8:23 PM