
vfs: distribute freevnodes counter per-cpu
Closed, Public

Authored by mjg on Jan 17 2020, 3:00 PM.
Details

Summary

The per-CPU counts get rolled up into the global counter when deferred requeueing is performed. A dedicated read routine makes sure the returned value is only off by a bounded amount.

This relieves a global serialisation point hit by all 0<->1 hold count transitions.
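
A minimal sketch of the rollup side, assuming the per-CPU state lives in a DPCPU-backed struct vdbatch; the identifiers and the exact rollup site are illustrative, not copied from the diff:

```c
struct vdbatch {
	long	freevnodes;	/* this CPU's delta against the global count */
	/* ... deferred-requeue state elided ... */
};
static DPCPU_DEFINE_STATIC(struct vdbatch, vd);

static struct mtx vnode_list_mtx;	/* protects the global vnode list */
static long freevnodes;			/* global count, updated under vnode_list_mtx */

static void
vdbatch_process(struct vdbatch *vd)
{

	mtx_lock(&vnode_list_mtx);
	/* Fold this CPU's free-vnode delta into the global counter. */
	freevnodes += vd->freevnodes;
	vd->freevnodes = 0;
	/* ... requeue the batched vnodes on the global vnode list ... */
	mtx_unlock(&vnode_list_mtx);
}
```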

Test Plan

In my tests with kern.maxvnodes=5000, vnode reclamation stats were almost identical. On a bigger-scale workload this gives the following after almost 9 hours:

vfs.numvnodes: 5209396
vfs.freevnodes: 4790184
vfs.recycles: 93
vfs.recycles_free: 1019102
vfs.alloc_sleeps: 0
vfs.freevnode_fetches: 1800

That is, the per-CPU walk was almost never performed (1800 fetches) despite reclamations being present.

Diff Detail

Lint: Skipped
Unit: Tests Skipped

Event Timeline

sys/kern/vfs_subr.c
3238

In principle this is wrong, since a sufficiently nasty compiler could split this into a load, a decrement, and a write back. Getting preempted in the middle by a thread which ends up doing vdbatch_process would then result in a miscalculation -- whatever alteration was made is now lost. I don't know if we care. If we do, the simplest thing to do is the critical_enter/exit dance.

  • don't depend on the compiler, just use critical_enter/exit for safety (see the sketch below)
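
A sketch of that fix, with the same illustrative names as above: critical_enter() disables preemption, so the load/decrement/store sequence can no longer lose an update to a vdbatch_process() run on the same CPU.

```c
static void
vfs_freevnodes_dec(void)
{
	struct vdbatch *vd;

	critical_enter();
	vd = DPCPU_PTR(vd);
	vd->freevnodes--;	/* read-modify-write now safe from preemption */
	critical_exit();
}
```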
sys/kern/vfs_subr.c
1305

Please use a define for magic numbers.

  • move slop to a macro
  • add a comment explaining the read func (see the sketch below)
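
A sketch of the read routine with both review requests applied; the slop constant's value and the freevnodes_old bookkeeping are assumptions for illustration, not the committed code:

```c
#define	VNLRU_FREEVNODES_SLOP	128	/* assumed value */

static long freevnodes_old;	/* global count at the last full walk */

/*
 * Return an approximation of the free vnode count.  The global
 * counter only reflects per-CPU deltas that were already rolled up,
 * so it can drift; walk all per-CPU counters only once the drift
 * since the last walk exceeds the accepted slop.
 */
static long
vnlru_read_freevnodes(void)
{
	struct vdbatch *vd;
	long slop;
	int cpu;

	mtx_assert(&vnode_list_mtx, MA_OWNED);
	if (freevnodes > freevnodes_old)
		slop = freevnodes - freevnodes_old;
	else
		slop = freevnodes_old - freevnodes;
	if (slop < VNLRU_FREEVNODES_SLOP)
		return (freevnodes >= 0 ? freevnodes : 0);
	freevnodes_old = freevnodes;
	CPU_FOREACH(cpu) {
		vd = DPCPU_ID_PTR(cpu, vd);
		freevnodes_old += vd->freevnodes;
	}
	return (freevnodes_old >= 0 ? freevnodes_old : 0);
}
```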
This revision is now accepted and ready to land. Jan 17 2020, 11:29 PM
This revision was automatically updated to reflect the committed changes.