AFAIK, there is currently no easy way to tell whether a system is
actively recycling vnodes, or how quickly it is being forced to do so.
In particular, on some workloads, constant recycling is a sign that
the system needs maxvnodes bumped (assuming there is sufficient RAM
available). I think that issue is less pressing now that maxvnodes no
longer has an artificially low cap of 100000 vnodes, but I think this
stat is still useful.
Details
- Reviewers: kib
- Commits: rS278760: Add two new counters for vnode life cycle events
Diff Detail
- Repository: rS FreeBSD src repository - subversion
Event Timeline
The patch definitely does not count all recycle events. It does not even count the number of recycles initiated by the vnlru_proc. Note that vnlru performs both vnlru_free()->vtryrecycle()->vgonel() (which you count) and vlrureclaim()->vgonel(), and both of these recycle a vnode.
So if your goal is to account for recycles initiated by the daemon or by vnode shortage at getnewvnode() time, then at least one more place, in vlrureclaim(), must be handled. If you want to account for all recycles, including events such as inactivation of an unlinked vnode or unmounts, it is simpler to do it in vgonel(), or even to add a post handler for VOP_RECYCLE.
Which do you think is more useful? My guess is that it is probably best to be as simple as possible and just count all recycles, but recycles due to unlink or unmount might indeed be noisy.
IMO the original stated intent, counting reclamations due to vnode deficit, is more interesting than overall vnode reclamation. As far as I see, your patch needs one more atomic_add_long() in vlrureclaim().
That said, did you consider counting vnode creation events instead of, or in addition to, reclamations due to vnode shortage? It is very easy to do; the only vnode constructor is getnewvnode(9). Graphs of numvnodes plus a getnewvnode call counter would probably provide the same information.
| sys/kern/vfs_subr.c | | |
|---|---|---|
| 164 ↗ | (On Diff #3508) | Is this mib RW on purpose? |
| 1000 ↗ | (On Diff #3508) | I think we must only count when the vgonel() call is performed, i.e. under the if() branch. |
| 1082 ↗ | (On Diff #3508) | Although this is currently not enabled, getnewvnode() was designed to be allowed to fail. |
| sys/kern/vfs_subr.c | | |
|---|---|---|
| 164 ↗ | (On Diff #3508) | Eh, hmm. Some stat nodes are RW to allow them to be zeroed, but it looks like most of the VFS ones are read-only (though reassignbufcalls is RW). I'll switch it to read-only. |
| 1000 ↗ | (On Diff #3508) | Agreed, I almost made that change but talked myself out of it. |
| 1082 ↗ | (On Diff #3508) | Done. I've also updated the counter's description to note that it only counts calls that succeed. Not sure if the name should be changed from 'getnewvnode_calls' to 'getnewvnode_<mumble>' as well? (Can't think of a decent <mumble> off the top of my head.) Maybe 'vnodes_allocated'? |
| sys/kern/vfs_subr.c | | |
|---|---|---|
| 126 ↗ | (On Diff #3667) | Might be vnodes_created? |
| sys/kern/vfs_subr.c | | |
|---|---|---|
| 126 ↗ | (On Diff #3667) | Ok. |