The patch can be split, but doing so is a little iffy.
The current logic to handle numvnodes is partially duplicated and differs depending on whether getnewvnode itself or the _reserve variant is called. Unify everything into vn_alloc.
Bumping numvnodes does not guarantee that UMA has memory to back a new vnode, meaning the current _reserve routine can still block much later (and possibly in a way which prevents some vnodes from getting freed). Since the only consumer passes a count of 1 and never nests reservations, change the code to simply preallocate a vnode.
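To illustrate, the preallocating variant could look roughly like this (just a sketch; the td_vp_reserved field and the vnode_zone name are stand-ins, not necessarily what the patch ends up using):

void
getnewvnode_reserve(void)
{
    struct thread *td;

    td = curthread;
    MPASS(td->td_vp_reserved == NULL);
    /* Grab the memory now so the later getnewvnode call cannot block in UMA. */
    td->td_vp_reserved = uma_zalloc(vnode_zone, M_WAITOK);
}

void
getnewvnode_drop_reserve(void)
{
    struct thread *td;

    td = curthread;
    if (td->td_vp_reserved != NULL) {
        uma_zfree(vnode_zone, td->td_vp_reserved);
        td->td_vp_reserved = NULL;
    }
}

getnewvnode would then consume td_vp_reserved when set and otherwise go through vn_alloc, which is also where the unification mentioned above comes in.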
Finally, the vnode limit is already not strictly maintained (and arguably it should not be). The current behavior is that if the immediate numvnodes bump fails, the code may try to either recycle a free vnode or wait for vnlru to free something up:
1. vnlru_free_locked makes only one attempt, which may fail -- it can return without having freed anything, and numvnodes is incremented anyway.
2. getnewvnode_wait sleeps for up to 1s and makes another vnlru_free_locked call, once more followed by incrementing numvnodes regardless of the outcome.
Point 2 in particular is highly problematic in that it inserts stalls when the total count is close to the limit, even when UMA has memory to accommodate the request.
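For reference, the slow path described above boils down to something like the following (heavily simplified paraphrase of the behavior, not the actual code; lock and wait-channel names are from memory and may not match the tree exactly):

    mtx_lock(&vnode_list_mtx);
    if (numvnodes >= desiredvnodes)
        vnlru_free_locked(1, NULL);     /* (1) one-shot, may free nothing */
    if (numvnodes >= desiredvnodes) {
        /* (2) stall for up to 1s even if UMA could satisfy the request */
        msleep(&vnlruproc_sig, &vnode_list_mtx, PVFS, "vlruwk", hz);
        vnlru_free_locked(1, NULL);
    }
    numvnodes++;                        /* bumped regardless of the outcome */
    mtx_unlock(&vnode_list_mtx);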
Also note that since the routine has been guaranteed not to fail for years now, making it able to fail would be quite problematic.
Even as it is, vnlru_proc is expected to bring the total count back into the preconfigured range.
Thus, until this gets further reworked, I think adding a uma_zalloc(..., M_NOWAIT) call before messing with numvnodes is the right thing to do. The total vnode count is already indirectly restricted by a sum of several factors. The newly added alloc adds more slop for vnlru_proc to sort out, but it also eliminates avoidable stalls; other than that, the code sticks to the current behavior.
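In other words, something along these lines (sketch only; vn_alloc_hard is a placeholder name for the existing recycle/wait slow path, and the exact numvnodes bookkeeping is illustrative):

static struct vnode *
vn_alloc(struct mount *mp)
{
    struct vnode *vp;

    if (__predict_true(numvnodes < desiredvnodes)) {
        /* Common case: under the limit, bump the count and allocate. */
        atomic_add_long(&numvnodes, 1);
        return (uma_zalloc(vnode_zone, M_WAITOK));
    }

    /*
     * At or over the (soft) limit.  Before recycling or sleeping behind
     * vnlru, check whether UMA can hand out memory right away; vnlru_proc
     * is expected to pull the count back into range later anyway.
     */
    vp = uma_zalloc(vnode_zone, M_NOWAIT);
    if (vp != NULL) {
        atomic_add_long(&numvnodes, 1);
        return (vp);
    }
    /* No memory readily available: recycle a free vnode or wait, then M_WAITOK. */
    return (vn_alloc_hard(mp));
}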
This also eliminates the vnlru_list acquire from this codepath during poudriere -j 104 and enables both numvnodes and freevnodes to be moved to a per-cpu scheme.
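For the per-cpu part, presumably something like counter(9) would do, at the cost of making limit checks approximate -- which is in line with the slop argument above. A rough sketch (helper names are made up):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/counter.h>

static counter_u64_t numvnodes_pcpu;

static void
numvnodes_pcpu_init(void *dummy __unused)
{
    numvnodes_pcpu = counter_u64_alloc(M_WAITOK);
}

static inline void
numvnodes_inc(void)
{
    /* Hot path: per-CPU increment, no shared cacheline bouncing. */
    counter_u64_add(numvnodes_pcpu, 1);
}

static inline bool
numvnodes_over_limit(void)
{
    /*
     * Limit checks and reporting only see an approximate sum, which is
     * acceptable given that the limit is already soft.
     */
    return (counter_u64_fetch(numvnodes_pcpu) >= desiredvnodes);
}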
tl;dr: this really needs to be reworked further, but the above patch mostly moves the problem out of the way.