
vm fault: Adapt to new VM_ALLOC_ZERO semantics
Accepted · Public

Authored by markj on Feb 19 2021, 8:20 PM.
Details

Reviewers
alc
kib
jeff
Summary

This removes the optimization which avoids zero-filling a pre-zeroed
page. On systems with a direct map I believe this optimization is
nearly useless: the main source of pre-zeroed pages is the pmap, which
frees pages to VM_FREEPOOL_DIRECT, but fault pages come from
VM_FREEPOOL_DEFAULT, so in a steady state the fault handler will not
pick up pre-zeroed pages anyway. This is reflected in the v_ozfod and
v_zfod counters: on amd64 systems the ratio of these values is very
small, typically much less than 1%.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 37204
Build 34093: arc lint + arc unit

Event Timeline

markj requested review of this revision. Feb 19 2021, 8:20 PM

Still, you might add a vm_page_alloc() flag asking not to clear PG_ZERO on return, making it the caller's duty. Then vm_fault() could use it to preserve the optimization. Could it be useful for 32-bit machines?

This revision is now accepted and ready to land. Feb 19 2021, 9:22 PM
In D28807#644985, @kib wrote:

Still, you might add a vm_page_alloc() flag asking not to clear PG_ZERO on return, making it the caller's duty. Then vm_fault() could use it to preserve the optimization. Could it be useful for 32-bit machines?

I considered it and was looking for i386 systems in the cluster so that I could check the v_zfod and v_ozfod counter values. I couldn't find any, though, so I will look at a VM soon and try some simple loads to see whether the optimization is worth preserving. My suspicion is that it is only a minor optimization even on 32-bit systems, since we rely on the pmap to provide pre-zeroed pages, and it will not provide very many relative to typical application usage. Pages allocated from superpage reservations are unlikely to be pre-zeroed. Finally, in principle the page will be warm in the data caches if it is zeroed on demand, whereas with a pre-zeroed page this is less likely.
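The counters in question can be read on a FreeBSD system with `sysctl vm.stats.vm.v_zfod vm.stats.vm.v_ozfod` (or seen in `vmstat -s` output). A quick sketch of computing the ratio, using made-up sample values so it runs anywhere:

```shell
# On FreeBSD, read the real counters with:
#   sysctl -n vm.stats.vm.v_ozfod vm.stats.vm.v_zfod
# The values below are made up, purely to show the ratio computation.
ozfod=1234
zfod=567890
awk -v o="$ozfod" -v z="$zfod" \
    'BEGIN { printf "ozfod/zfod = %.4f%%\n", 100 * o / z }'
```

A ratio well under 1%, as the summary reports for amd64, indicates the pre-zeroed-page fast path almost never fires.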


I've looked at the "vmstat -s" output from an i386 machine over the past 9 months, and the results are no different.

If we wanted to make this work a little better, we could have two cache zones per pool: one for prezeroed pages and one for the rest. Right now it is a matter of luck whether you get a prezeroed page, and some of the few that we have are being returned to callers who don't want one. This would probably work best for amd64, where there is a distinct direct map pool that is going to be filled by the pmap returning page table pages.