
vm_page_grab*: Consolidate common logic into vm_page_grab_release()
Needs Review · Public

Authored by markj on Feb 16 2021, 10:25 PM.



This is from Jeff's object_concurrency branch ("Normalize VM_ALLOC_ZERO
handling in page busy routines.").

The aim here is to make the vm_page grab routines a bit less sprawling.
In particular:

  • vm_page_grab_release() now handles VM_ALLOC_WIRE
  • vm_page_grab_release() now handles downgrading of busy state
  • vm_page_grab_release() now handles page zeroing
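The shape of that consolidation can be sketched with a toy model (this is not the FreeBSD code; `grab_release`, the `page` struct, and the `ALLOC_*` flag names here are illustrative stand-ins for `vm_page_grab_release()` and the `VM_ALLOC_*` flags):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/* Toy stand-ins for the real vm_page and VM_ALLOC_* flags. */
#define ALLOC_WIRED   0x1
#define ALLOC_SBUSY   0x2   /* caller wants shared busy, not exclusive */
#define ALLOC_ZERO    0x4
#define ALLOC_NOBUSY  0x8

struct page {
	int     wire_count;
	bool    xbusy;      /* exclusively busied */
	bool    sbusy;      /* shared busied */
	bool    valid;
	uint8_t data[4096];
};

/*
 * Common tail of the grab routines: wiring, busy-state downgrade,
 * and zeroing are applied in one place instead of being duplicated
 * in every vm_page_grab* variant.
 */
static void
grab_release(struct page *m, int allocflags)
{
	if (allocflags & ALLOC_WIRED)
		m->wire_count++;
	if ((allocflags & ALLOC_ZERO) && !m->valid) {
		memset(m->data, 0, sizeof(m->data));
		m->valid = true;
	}
	if (allocflags & ALLOC_NOBUSY) {
		m->xbusy = false;
	} else if (allocflags & ALLOC_SBUSY) {
		m->xbusy = false;   /* downgrade exclusive -> shared */
		m->sbusy = true;
	}
}
```

Each grab variant would then call this helper once before returning the page, rather than open-coding the three steps.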


Event Timeline


This isn't ideal: it is cheaper to wire a page in the page allocator than here. In particular, vm_page_wire() has to use an atomic fetchadd to update the wire count, whereas the page allocator can modify the ref_count field without atomics.
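The cost difference being pointed out can be modeled in a few lines (a sketch, not the kernel code; `tpage`, `wire_shared()`, and `init_wired()` are made-up names, and C11 `stdatomic` stands in for the kernel's atomic primitives):

```c
#include <stdatomic.h>

/* Toy ref_count resembling vm_page's: wirings are counted in it. */
struct tpage {
	atomic_uint ref_count;
};

/*
 * Shared page: other threads may hold references concurrently, so
 * wiring must use an atomic read-modify-write, as vm_page_wire()
 * does.  Returns the new reference count.
 */
static unsigned
wire_shared(struct tpage *m)
{
	return (atomic_fetch_add_explicit(&m->ref_count, 1,
	    memory_order_acq_rel) + 1);
}

/*
 * Freshly allocated page: the allocating thread holds the only
 * reference, so the allocator can simply store the initial count,
 * with no read-modify-write cycle.
 */
static void
init_wired(struct tpage *m)
{
	atomic_store_explicit(&m->ref_count, 1, memory_order_relaxed);
}
```

The point of the review comment is that wiring at allocation time takes the cheap `init_wired()`-style path, while wiring afterwards in the grab helper pays for the `wire_shared()`-style fetchadd.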


This update is unsynchronized. The flags field is supposed to be modified only at page allocation or free time, when only the current thread holds a reference to the page.

I'll see if I can refactor the code to avoid this without making a mess, but I believe it is harmless, at least for now.
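The single-owner invariant behind this comment can be illustrated with a toy (`fpage`, `PGF_ZERO`, and `alloc_time_set()` are hypothetical names, not FreeBSD identifiers): plain, non-atomic updates of a flags word are race-free only while the updating thread holds the sole reference, which is exactly the allocation/free-time condition described above.

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

#define PGF_ZERO 0x1	/* illustrative flag: page is pre-zeroed */

struct fpage {
	_Atomic uint32_t refs;
	uint32_t flags;	/* plain field: single-owner by convention */
};

/*
 * Safe only at allocation/free time: the caller must hold the sole
 * reference, so the plain read-modify-write of flags cannot race
 * with another thread.  The assertion documents the invariant.
 */
static void
alloc_time_set(struct fpage *m, uint32_t flag)
{
	assert(atomic_load(&m->refs) == 1);
	m->flags |= flag;
}
```

Updating flags outside that window, as the diff under review does, is a data race in principle, even if it happens to be benign in this particular case.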


Maybe elaborate on this comment a bit? Something like: "vm_page_grab* functions will ensure the page is zero-filled and fully valid if VM_ALLOC_ZERO is passed. vm_page_alloc* functions will only make a best effort to allocate already zero-filled pages if VM_ALLOC_ZERO is passed."


I actually aimed to change this semantic in , but haven't had time to revise the diff accordingly. My plan is to rebase this diff on top of that one. Indeed, though, we should update this comment to note that vm_page_grab* returns zero-filled pages if so requested.