
vfs: further speed up continuous free vnode recycle

Description

The primary bottleneck *was* vnode_list mtx, which got artificially
worsened due to the following work done with the lock held:

  1. the global, heavily modified numvnodes counter was being read, inducing massive cache line ping-pong
  2. should the value fit the limits (which it normally did), there would be an avoidable write to vn_alloc_cyclecount, which is read outside of the lock, once more inducing traffic

But if vn_alloc_cyclecount is 0, which it normally is even when facing
vnode shortage, there is no need to check numvnodes nor to set
vn_alloc_cyclecount back to 0.
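
A minimal sketch of the resulting fast path, assuming simplified structure:
vn_alloc_cycle_check() is a hypothetical helper, the vnode_list mutex and the
rest of the allocation logic are elided, and only the variable names
(numvnodes, vn_alloc_cyclecount, desiredvnodes) match the kernel's globals;
this is not the actual code in sys/kern/vfs_subr.c.

#include <stdbool.h>

/*
 * Hypothetical stand-ins for the kernel's globals; locking is elided.
 */
static long numvnodes;			/* global, heavily modified counter */
static long vn_alloc_cyclecount;	/* normally 0 even under vnode shortage */
static long desiredvnodes;		/* the vnode limit */

/*
 * Fast path: if no recycle cycle has been recorded, skip reading
 * numvnodes (whose cache line would otherwise ping-pong between CPUs)
 * and skip the redundant store of 0 over an already-zero
 * vn_alloc_cyclecount, which is read outside of the lock.
 */
static bool
vn_alloc_cycle_check(void)
{

	if (vn_alloc_cyclecount == 0)
		return (true);
	/*
	 * Slow path: only now look at the contended counter, and clear
	 * the cycle count when the allocation still fits the limit.
	 */
	if (numvnodes < desiredvnodes) {
		vn_alloc_cyclecount = 0;
		return (true);
	}
	return (false);
}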

Another problem was numvnodes adjustment (which made the locked read
much worse). While it fundamentally does not scale as it is not
distributed in any fashion, it was avoidably slow. When bumping over the
vnode limit, it would be modified with atomics 3 times: inc + dec to
backpedal in vn_alloc, then final inc in vn_alloc_hard.

One can let some slop persist over calls to vnlru_free instead.

In principle each thread in the system could get here and bump it, so a
limit is put in place to keep things sane.
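
A sketch of the before/after counter handling, using C11 atomics in place of
the kernel's atomic(9) primitives; the function names and the VN_ALLOC_SLOP
cap are illustrative and not the actual code or constant in vfs_subr.c.

#include <stdatomic.h>
#include <stdbool.h>

/* Hypothetical stand-ins; the slop value is illustrative. */
static atomic_long numvnodes;
static long desiredvnodes = 400000;
#define	VN_ALLOC_SLOP	128		/* illustrative cap, not the kernel's */

/*
 * Old scheme: when the bump lands over the limit, numvnodes is hit with
 * three atomics - the initial inc, a dec to backpedal in vn_alloc, and
 * a final inc in vn_alloc_hard.
 */
static void
vn_alloc_bump_old(void)
{

	if (atomic_fetch_add(&numvnodes, 1) + 1 > desiredvnodes) {
		atomic_fetch_sub(&numvnodes, 1);	/* backpedal */
		/* ... fall through to the hard allocation path ... */
		atomic_fetch_add(&numvnodes, 1);	/* final bump */
	}
}

/*
 * New scheme: keep the single bump and tolerate a bounded overshoot
 * ("slop") that later calls to vnlru_free work off.  Only give the
 * increment back when the overshoot exceeds the cap, so that many
 * threads arriving at once cannot push numvnodes arbitrarily far past
 * the limit.
 */
static bool
vn_alloc_bump_new(void)
{

	if (atomic_fetch_add(&numvnodes, 1) + 1 >
	    desiredvnodes + VN_ALLOC_SLOP) {
		atomic_fetch_sub(&numvnodes, 1);	/* over the cap: backpedal */
		return (false);
	}
	return (true);
}

The point is that in the common over-limit case the contended cache line is
touched once instead of three times.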

Bench setup same as in prior commits: zfs, 20 separate directory trees
each with 1 million files in total and 20 find(1) processes stating them
in parallel (one per tree).

Total run time (in seconds) goes down as follows:
vnode limit    8388608    400000
before             ~20       ~35
after               ~8       ~15

With this in place the primary bottleneck is now ZFS.

Sponsored by: Rubicon Communications, LLC ("Netgate")

(cherry picked from commit 054f45e026d898bdc8f974d33dd748937dee1d6b)

Details

Provenance
mjg authored on Oct 11 2023, 9:42 AM
Parents
rGe47ede47277c: vfs: don't recycle transiently excess vnodes