This patch wraps canonically protected valid bit access in inline functions and leaves open-coded those accesses that have special protection. It slightly expands the scope of the busy lock to ensure that all valid manipulation is done with busy held. In the case of a shared busy lock we use atomic updates to the valid field, as we already do with dirty. After this change the following guarantees hold:
 - valid is always set with the busy lock held.
 - valid is always cleared with both the busy lock and the object lock held.
 - valid can reliably be checked against zero with either the object lock or the busy lock held.
The first clause should be clear. Previously we may not have held the busy lock ourselves, but it was guaranteed that no one else held it, which is logically equivalent to holding it.
The second clause exists because some callers, like vm_map_pmap_enter(), want to enter valid pages without checking the busy state. Fortunately, valid is only cleared on truncate and pageout, so in practice the object lock requirement is rarely a burden.