This is a scheme to avoid taking the bufobj lock and doing a second
lookup in getblk when an unlocked lookup finds no buf. Was there
really no buf, or were we in the middle of a reassignbuf race? By
tracking any use of reassignbuf with a flag, we know a race was
impossible whenever reassignbuf has never been called. Because the
scheme is spoiled by the first use of reassignbuf, it is mostly
beneficial for vnodes that are never expected to use dirty bufs at
all.
Sponsored by: Dell EMC Isilon
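To make the scheme concrete, a minimal sketch follows (this is an
illustration, not the actual patch; BO_REASSIGNED is a hypothetical
bufobj flag name and the placement of the checks is approximate):

	/*
	 * In reassignbuf(): remember, once per bufobj, that a
	 * reassignment has happened; this spoils the fast path below.
	 */
	if ((bo->bo_flag & BO_REASSIGNED) == 0)
		atomic_set_int(&bo->bo_flag, BO_REASSIGNED);

	/*
	 * In getblk() with GB_NOCREAT: if the unlocked lookup misses
	 * and reassignbuf has never run on this bufobj, there was no
	 * race to lose, so skip the locked retry.
	 */
	bp = gbincore_unlocked(bo, blkno);
	if (bp == NULL && (flags & GB_NOCREAT) != 0 &&
	    (bo->bo_flag & BO_REASSIGNED) == 0)
		return (NULL);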
I hesitated to submit this patch because in-tree we appear to have
only a couple of uses of GB_NOCREAT, and they probably won't benefit
from this, while we may slightly penalize everyone else with the
branch and atomic in reassignbuf. So I understand if we don't see a
benefit to having this in-tree. We have an out-of-tree use of getblk
that involves GB_NOCREAT on vnodes that don't use dirty bufs.
Likewise, there are more sophisticated ways to resolve the unlocked
lookup race, like an atomic reassignbuf counter or a scheme like the
one vm_object_mightbedirty() uses. I avoided those because neither
the in-tree uses nor our out-of-tree use would benefit.