Otherwise we are susceptible to a race with a concurrent vm_map_wire()
call, which faults pages into the object chain and drops the map lock
between calls to vm_fault(). In particular, vm_map_protect() copies
newly writable wired pages only if MAP_ENTRY_USER_WIRED is set, but
vm_map_wire() sets that flag only after its fault loop completes. So we
may end up with a PROT_WRITE wired map entry whose top-level object does
not contain the entire range of pages.
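
A simplified sketch of the problematic interleaving (locking details
and error handling elided; an illustration of the ordering, not the
exact call sequence):

    thread A: vm_map_wire(map, start, end, flags)
        mark the entries MAP_ENTRY_IN_TRANSITION
        for each page run:
            drop the map lock
            vm_fault()               /* wire the pages */
            reacquire the map lock
    thread B: vm_map_protect(map, start, end, PROT_READ | PROT_WRITE)
        sees a wired entry without MAP_ENTRY_USER_WIRED, so it does
        not copy the newly writable wired pages into the top-level
        object
    thread A: set MAP_ENTRY_USER_WIRED on the entries

At the end, the entry is both PROT_WRITE and user-wired, but some of
its pages live only in backing objects.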

It is also possible to address this race by checking for in-transition
map entries in the final loop of vm_map_protect(), but that is more
complicated, since it requires some restart logic, and I don't see any
real benefit.

Unrelated changes are to add assertions to the map clip functions
(motivated by a bug in my first attempt to check for and handle
MAP_ENTRY_IN_TRANSITION entries in vm_map_protect()), and to
consistently use for-loops in vm_map_protect().