
vfs: distribute freevnodes counter per-cpu
Closed, Public

Authored by mjg on Jan 17 2020, 3:00 PM.
Tags
None

Details

Summary

The per-CPU counts get rolled up into the global counter when deferred requeueing is performed. A dedicated read routine makes sure to return a value that is only off by a bounded amount.

This relieves a global serialisation point hit on all 0<->1 hold count transitions.
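
A minimal sketch of the scheme, assuming a per-CPU batch structure along the lines of the one used for deferred requeueing; all identifiers here (vd_batch, vd_batch_process, vnode_list_lock) are illustrative, not the committed names in sys/kern/vfs_subr.c:

```c
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/pcpu.h>

/* Global counter; may lag behind by the per-CPU deltas not yet rolled up. */
static long freevnodes;
/* Stand-in for the global vnode list lock; initialised elsewhere with mtx_init(). */
static struct mtx vnode_list_lock;

struct vd_batch {
        int     freevnodes;     /* per-CPU delta since the last rollup */
        /* ... the deferred-requeue batch itself lives here as well ... */
};
DPCPU_DEFINE_STATIC(struct vd_batch, vd_batch);

/*
 * Called when a CPU's deferred-requeue batch is processed: requeue the
 * batched vnodes and fold the accumulated per-CPU delta into the global
 * counter, so the rollup piggybacks on work that is done anyway.
 */
static void
vd_batch_process(struct vd_batch *vd)
{

        mtx_lock(&vnode_list_lock);
        freevnodes += vd->freevnodes;
        vd->freevnodes = 0;
        /* ... requeue the batched vnodes while the lock is held ... */
        mtx_unlock(&vnode_list_lock);
}
```

The point of the design is that the rollup only happens when the batch already has to be processed under the list lock, so the hot 0<->1 hold count path no longer touches any global state.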

Test Plan

In my tests with kern.maxvnodes=5000, vnode reclamation stats were almost identical. On a bigger-scale workload this gives the following after almost 9 hours:

vfs.numvnodes: 5209396
vfs.freevnodes: 4790184
vfs.recycles: 93
vfs.recycles_free: 1019102
vfs.alloc_sleeps: 0
vfs.freevnode_fetches: 1800

That is, the per-CPU walk was almost never done despite reclamations being present.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Not Applicable
Unit
Tests Not Applicable

Event Timeline

sys/kern/vfs_subr.c
3238 ↗(On Diff #66913)

In principle this is wrong, since a sufficiently nasty compiler could split this into a load, a decrement and a write back. Getting preempted there by a thread which ends up doing vdbatch_process would then result in a miscalculation -- whatever alteration vdbatch_process made is now lost. I don't know if we care. If we do, the simplest thing to do is the critical_enter/exit dance.
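
A sketch of the hazard and of the critical_enter/exit fix, reusing the hypothetical vd_batch layout from the summary sketch; vfs_freevnodes_dec is likewise an illustrative name:

```c
/*
 * Without a critical section the compiler may turn the per-CPU update
 * into a separate load, decrement and store.  If the thread is
 * preempted between the load and the store by one that runs the rollup
 * (which zeroes vd->freevnodes after adding it to the global counter),
 * the stale store overwrites that zeroing and the delta gets counted
 * again later.  Disabling preemption around the read-modify-write
 * closes that particular window.
 */
static void
vfs_freevnodes_dec(void)
{
        struct vd_batch *vd;

        critical_enter();
        vd = DPCPU_PTR(vd_batch);
        vd->freevnodes--;
        critical_exit();
}
```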

  • don't depend on the compiler, just critical enter/exit for safety
sys/kern/vfs_subr.c
1305 ↗(On Diff #66934)

Please use a define for magic numbers.

  • move slop to a macro
  • add a comment explaining the read func
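
Putting the two update notes above together, a sketch of what the read side could look like with the slop behind a define; VD_FREEVNODES_SLOP, freevnodes_snapshot, freevnodes_fetches and vfs_read_freevnodes are again illustrative names and values, not the committed ones:

```c
#include <sys/smp.h>    /* CPU_FOREACH, in addition to the headers above */

#define VD_FREEVNODES_SLOP      128     /* arbitrary illustrative bound */

static long freevnodes_snapshot;        /* global value at the last per-CPU walk */
static u_long freevnodes_fetches;       /* corresponds to vfs.freevnode_fetches */

/*
 * Return the free-vnode count, off by at most the slop.  For simplicity
 * this sketch assumes readers are serialised externally, e.g. by the
 * global list lock from the first sketch.
 */
static long
vfs_read_freevnodes(void)
{
        struct vd_batch *vd;
        long drift, rv;
        int cpu;

        /*
         * Per-CPU deltas are folded in whenever a batch is processed, so
         * they stay small; the interesting drift accumulates in the
         * global counter.  If it moved by less than the slop since the
         * last walk, the global value is close enough and the walk is
         * skipped -- which is why vfs.freevnode_fetches stays low in the
         * test plan above.
         */
        rv = atomic_load_long(&freevnodes);
        drift = rv > freevnodes_snapshot ?
            rv - freevnodes_snapshot : freevnodes_snapshot - rv;
        if (drift < VD_FREEVNODES_SLOP)
                return (rv >= 0 ? rv : 0);

        /* Slow path: account the fetch and fold in every per-CPU delta. */
        freevnodes_fetches++;
        freevnodes_snapshot = rv;
        CPU_FOREACH(cpu) {
                vd = DPCPU_ID_PTR(cpu, vd_batch);
                rv += vd->freevnodes;
        }
        return (rv >= 0 ? rv : 0);
}
```
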
This revision is now accepted and ready to land. Jan 17 2020, 11:29 PM
This revision was automatically updated to reflect the committed changes.