
Update the checks in vm_page_zone_import().
ClosedPublic

Authored by markj on Fri, Nov 15, 11:46 PM.

Details

Summary
  • Remove the cnt == 1 check. UMA passes cnt == 1 when it has disabled per-CPU caching. In this case we might as well just allocate a single page and return it to the caller, since the caller is going to do exactly that anyway if the UMA cache allocation attempt fails.
  • Don't replenish caches if the domain is severely short on free pages. With large buckets we may otherwise quickly exacerbate a situation where the page daemon is failing to keep up.
  • Don't replenish caches if the calling thread belongs to the page daemon, which should avoid creating extra memory pressure when it is trying to free memory.

Diff Detail

Repository
rS FreeBSD src repository

Event Timeline

markj created this revision.Fri, Nov 15, 11:46 PM
dougm added inline comments.Sat, Nov 16, 6:21 AM
sys/vm/vm_page.c
2330 ↗(On Diff #64410)

You've broken vm_domain_allocate apart to save some work for this call. But you could have broken it up at a different point, so that you were passing the parameter limit=vmd->vmd_free_reserved instead of req=VM_ALLOC_NORMAL.

kib accepted this revision.Sat, Nov 16, 10:17 AM
kib added inline comments.
sys/vm/vm_page.c
2330 ↗(On Diff #64410)

In fact, this puts both the pageout and laundry threads under the policy. Should we add swapout as well?

This revision is now accepted and ready to land.Sat, Nov 16, 10:17 AM
markj added inline comments.Sat, Nov 16, 5:42 PM
sys/vm/vm_page.c
2330 ↗(On Diff #64410)

Is there much benefit to breaking it up the way you suggested? I find it less clear: VM_ALLOC_NORMAL/SYSTEM/INTERRUPT are used by any code that calls the page allocator, so the use of VM_ALLOC_NORMAL in zone_import "obviously" indicates that the import will fail during a severe page shortage. The specific limits are internal to the allocator, so passing one as a parameter here would probably force me to go look up its definition.

2330 ↗(On Diff #64410)

Indeed. Most of the allocations will be done by the laundry thread: if there is a free page shortage, it is likely that various swap-related UMA caches will be drained, so the laundry thread will be forced to allocate new slabs. In fact I do not think that the main page daemon thread will ever allocate memory. For laundering this check is actually insufficient since swap pageouts usually require work by other threads, in GEOM and CAM for instance.

I could add the swapout daemon but I do not see a situation where that would help anything: recall that vm_thread_swapout() simply unwires kernel stack pages and puts them in the laundry queue.

alc added inline comments.Sat, Nov 16, 7:04 PM
sys/vm/vm_page.c
1833 ↗(On Diff #64410)

For clarity, I would change "req" to "req_class", since this parameter must only be a class.

1874 ↗(On Diff #64410)

req_class = req & VM_ALLOC_CLASS_MASK;

jeff added a comment.Sun, Nov 17, 12:08 AM

We run all of our paging threads constantly now. I would prefer not to disable creating cache buckets from pageproc unless we're in a low-memory situation. Even then, it may be preferable to simply flush the buckets at the end of paging.

markj added a comment.Mon, Nov 18, 7:02 PM
In D22394#490201, @jeff wrote:

We run all of our paging threads constantly now. I would prefer not to disable creating cache buckets from pageproc unless we're in a low-memory situation. Even then, it may be preferable to simply flush the buckets at the end of paging.

The page daemon threads don't regularly allocate pages as far as I know. Maybe infrequently, when unmapping a page causes a superpage demotion and the leaf PTP is not cached in the pmap, and I believe that will only happen when the mapping was created with pmap_enter(psind==1). I have a hard time believing that this can happen frequently enough for per-CPU page caching to be important. This change is for the benefit of the laundry threads, which may have to allocate slabs for various UMA zones. Outside of low-memory scenarios, UMA caching will make slab allocations rare.

alc added a comment.Mon, Nov 18, 7:33 PM
In D22394#490201, @jeff wrote:

We run all of our paging threads constantly now. I would prefer not to disable creating cache buckets from pageproc unless we're in a low-memory situation. Even then, it may be preferable to simply flush the buckets at the end of paging.

The page daemon threads don't regularly allocate pages as far as I know. Maybe infrequently, when unmapping a page causes a superpage demotion and the leaf PTP is not cached in the pmap, and I believe that will only happen when the mapping was created with pmap_enter(psind==1).

It can also happen when the superpage mapping was created by pmap_enter_object(), in other words, a prefaulted superpage mapping created by execve() or mmap(). As with pmap_enter(psind==1), I agree that this is going to be a rare event.

markj updated this revision to Diff 64540.Mon, Nov 18, 8:04 PM
markj marked 2 inline comments as done.

Handle Alan's notes.

This revision now requires review to proceed.Mon, Nov 18, 8:04 PM
This revision was not accepted when it landed; it landed in state Needs Review.Fri, Nov 22, 4:31 PM
This revision was automatically updated to reflect the committed changes.