Add vm_page_alloc_after()
ClosedPublic

Authored by markj on Aug 11 2017, 5:01 PM.

Details

Summary

This is a KPI that behaves the same as vm_page_alloc() but allows the
caller to specify the page with the largest pindex that is less than the
requested pindex, thus avoiding a radix tree lookup.
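A minimal usage sketch (hedged: the argument order shown, with mpred last, matches the final form discussed below; the locals are assumed for illustration):

    /*
     * Look up the predecessor once, then allocate with the hint.
     * "object", "pindex", "mpred", and "m" are assumed locals.
     */
    VM_OBJECT_WLOCK(object);
    mpred = vm_radix_lookup_le(&object->rtree, pindex);
    m = vm_page_alloc_after(object, pindex, VM_ALLOC_NORMAL, mpred);
    VM_OBJECT_WUNLOCK(object);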

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

I think that the explanation from the revision's description should be added as a comment.

This revision is now accepted and ready to land. Aug 11 2017, 6:12 PM
markj edited edge metadata.
  • Add a comment above vm_page_alloc_after().
  • Have kmem_back() use vm_page_alloc_after().
This revision now requires review to proceed. Aug 11 2017, 10:09 PM
sys/vm/vm_kern.c
349 ↗(On Diff #31963)

I'm wondering if it would be preferable to have vm_page_alloc_after() attempt a lookup itself when mpred is NULL. Otherwise I think pretty much every vm_page_alloc_after() caller will end up with this pattern of calling vm_page_alloc() on the first iteration.
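Something like this at the top of vm_page_alloc_after() would do it; a sketch only, with the parameter names assumed:

    /* Fall back to a radix lookup when the caller has no hint. */
    if (mpred == NULL)
            mpred = vm_radix_lookup_le(&object->rtree, pindex);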

sys/vm/vm_kern.c
349 ↗(On Diff #31963)

Then make vm_page_alloc() a macro calling vm_page_alloc_after() with the NULL arg.
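A minimal sketch of that macro, assuming mpred as the last parameter:

    /* Hypothetical wrapper: a plain vm_page_alloc() passes no hint. */
    #define vm_page_alloc(object, pindex, req) \
            vm_page_alloc_after((object), (pindex), (req), NULL)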

I think that the callers should be rewritten to use vm_radix_lookup_le(), e.g.,

Index: vm/vm_kern.c
===================================================================
--- vm/vm_kern.c        (revision 322403)
+++ vm/vm_kern.c        (working copy)
@@ -332,7 +332,7 @@ int
 kmem_back(vm_object_t object, vm_offset_t addr, vm_size_t size, int flags)
 {
        vm_offset_t offset, i;
-       vm_page_t m;
+       vm_page_t m, mpred;
        int pflags;
 
        KASSERT(object == kmem_object || object == kernel_object,
@@ -342,9 +342,12 @@ kmem_back(vm_object_t object, vm_offset_t addr, vm
        pflags = malloc2vm_flags(flags) | VM_ALLOC_NOBUSY | VM_ALLOC_WIRED;
 
        VM_OBJECT_WLOCK(object);
-       for (i = 0; i < size; i += PAGE_SIZE) {
+       i = 0;
 retry:
-               m = vm_page_alloc(object, atop(offset + i), pflags);
+       mpred = vm_radix_lookup_le(&object->rtree, atop(offset + i));
+       for (; i < size; i += PAGE_SIZE, mpred = m) {
+               m = vm_page_alloc_after(object, atop(offset + i), pflags,
+                   mpred);
 
                /*
                 * Ran out of space, free everything up and return. Don't need

P.S. I have a slight stylistic preference for "mpred" being the last parameter, like with vm_page_insert_after().
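For comparison, the vm_page_insert_after() prototype as I recall it from vm_page.c (hedged, from memory):

    static int
    vm_page_insert_after(vm_page_t m, vm_object_t object,
        vm_pindex_t pindex, vm_page_t mpred);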

  • Reorder arguments to vm_page_alloc_after()
  • Use vm_radix_lookup_le() to initialize mpred
markj added inline comments.
sys/vm/vm_kern.c
349 ↗(On Diff #31963)

This is addressed by the change to use vm_radix_lookup_le().

sys/vm/vm_kern.c
349 ↗(On Diff #32073)

I don't think that you want the "mpred = NULL" here.

  • Eliminate bogus initialization.

I'm happy with this.

This revision is now accepted and ready to land. Aug 15 2017, 4:00 AM
This revision was automatically updated to reflect the committed changes.
bdrewery added inline comments.
head/sys/vm/vm_page.c
3221

Do we need to check mpred->pindex == m->pindex - 1 here? I'm unsure, but every other use of TAILQ_PREV(listq) checks the pindex after the lookup. Probably not, since we fetched mpred from the object's radix tree and the object is locked.

I know that, at least when allocating a page, we may have bogus values in both tqe_next and tqe_prev left over from other uses, such as the free lists, and these are not fixed up until the page is added to the object's memq. Likewise, do we need QMD_IS_TRASHED(mpred) checks in more places, as vm_page_acquire_unlocked does? It's concerning how overloaded plinks and listq are.

head/sys/vm/vm_page.c
3221

We'd only check mpred->pindex == m->pindex - 1 if we wanted to know whether there's a hole in the set of pages resident in the object. Suppose you're paging in a run of pages from swap: you'd use TAILQ_PREV() to find the previous resident page, and the size of the hole between the previous and current pages gives an upper bound on the size of the (read-behind) run.
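A sketch of that bound computation, with "maxbehind" and "behind" as assumed locals:

    /* The hole before m caps how far back we can read. */
    mpred = TAILQ_PREV(m, pglist, listq);
    if (mpred != NULL)
            behind = MIN(maxbehind, m->pindex - mpred->pindex - 1);
    else
            behind = MIN(maxbehind, m->pindex);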

Here we are looking up the preceding page in much the same way that a plain vm_page_alloc() does, except that here the page at pindex + i may already be resident, so we have to handle both cases.
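A sketch of that dual-case handling (locals assumed, not the committed code verbatim):

    /* The lookup may find the page itself or only its predecessor. */
    m = vm_radix_lookup_le(&object->rtree, pindex + i);
    if (m == NULL || m->pindex != pindex + i) {
            mpred = m;      /* Predecessor hint; may be NULL. */
            m = vm_page_alloc_after(object, pindex + i, pflags, mpred);
    }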

We have QUEUE_MACRO_DEBUG_TRASH enabled by default now, so code requiring QMD_IS_TRASHED() should be pretty easy to find.

Yes, those fields are overloaded, but we really want struct vm_page to be small. In general, uses of these fields have to be synchronized with page allocation and freeing.