
vm fault: Adapt to new VM_ALLOC_ZERO semantics
Accepted · Public

Authored by markj on Feb 19 2021, 8:20 PM.

Details

Reviewers
alc
kib
jeff
Summary

This removes the optimization which avoids zero-filling a pre-zeroed
page. On systems with a direct map I believe this optimization is
nearly useless: the main source of pre-zeroed pages is the pmap, which
frees pages to VM_FREEPOOL_DIRECT, but fault pages come from
VM_FREEPOOL_DEFAULT, so in a steady state the fault handler will not
pick up pre-zeroed pages anyway. This is reflected in the v_ozfod and
v_zfod counters: on amd64 systems the ratio of these values is very
small, typically much less than 1%.
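
For illustration, the zero-fill path being changed looks roughly like the sketch below; this is a simplification of the vm_fault zero-fill logic, not the literal diff. The PG_ZERO check and the v_ozfod accounting go away, and the page is zeroed unconditionally.

/* Before (sketch): skip zeroing when the allocator returned a page
 * already marked PG_ZERO, and count it as an "optimized" zero fill. */
if ((fs->m->flags & PG_ZERO) == 0)
        pmap_zero_page(fs->m);
else
        VM_CNT_INC(v_ozfod);
VM_CNT_INC(v_zfod);

/* After (sketch): always zero; v_ozfod is no longer bumped here. */
pmap_zero_page(fs->m);
VM_CNT_INC(v_zfod);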

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 37204
Build 34093: arc lint + arc unit

Event Timeline

markj requested review of this revision. Feb 19 2021, 8:20 PM

kib added a comment.

Still, you might add a vm_page_alloc() flag that would ask not to clear PG_ZERO on return, making it the duty of the caller. Then vm_fault() could utilize it to preserve the optimization. Could it be useful for 32-bit machines?
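
A rough sketch of what such a flag could look like from the caller's side; the flag name VM_ALLOC_NOCLRZERO is hypothetical and does not exist in vm_page_alloc() today.

/* Hypothetical sketch: a new request flag (VM_ALLOC_NOCLRZERO, name and
 * bit value invented here) asks the allocator to leave PG_ZERO set so
 * the caller can decide whether the page still needs zeroing. */
m = vm_page_alloc(object, pindex, VM_ALLOC_NORMAL | VM_ALLOC_NOCLRZERO);
if (m != NULL) {
        if ((m->flags & PG_ZERO) == 0)
                pmap_zero_page(m);
        else
                VM_CNT_INC(v_ozfod);
        VM_CNT_INC(v_zfod);
}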

This revision is now accepted and ready to land. Feb 19 2021, 9:22 PM
In D28807#644985, @kib wrote:

Still, you might add a vm_page_alloc() flag that would ask not to clear PG_ZERO on return, making it the duty of the caller. Then vm_fault() could utilize it to preserve the optimization. Could it be useful for 32-bit machines?

I considered it and was looking for i386 systems in the cluster so I could check the v_zfod and v_ozfod counter values. I couldn't find any though, so I will look at a VM soon and try some simple loads to see if it is worth preserving. My suspicion is that it is still a minor optimization even on 32-bit systems since we are relying on the pmap to provide pre-zeroed pages, and it will not provide very many relative to typical application usage. Pages allocated from superpage reservations are unlikely to be pre-zeroed. Finally, in principle the page will be warm in the data caches if it is zeroed on demand, while with a pre-zeroed page this is less likely.
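
For reference, the two counters can also be read programmatically rather than scraped from "vmstat -s". A minimal userland sketch, assuming the counters are exported as 64-bit values under vm.stats.vm as on recent FreeBSD (older branches may export them as 32-bit):

#include <sys/types.h>
#include <sys/sysctl.h>

#include <err.h>
#include <stdint.h>
#include <stdio.h>

int
main(void)
{
        uint64_t zfod, ozfod;
        size_t len;

        len = sizeof(zfod);     /* total zero-fill-on-demand faults */
        if (sysctlbyname("vm.stats.vm.v_zfod", &zfod, &len, NULL, 0) != 0)
                err(1, "v_zfod");
        len = sizeof(ozfod);    /* zero fills satisfied by PG_ZERO pages */
        if (sysctlbyname("vm.stats.vm.v_ozfod", &ozfod, &len, NULL, 0) != 0)
                err(1, "v_ozfod");
        printf("ozfod/zfod: %ju/%ju (%.2f%%)\n", (uintmax_t)ozfod,
            (uintmax_t)zfod, zfod != 0 ? 100.0 * ozfod / zfod : 0.0);
        return (0);
}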

I've looked at the "vmstat -s" output from an i386 machine in the past 9 months and the results are no different.

If we wanted to make this work a little better, we could have two cache zones per pool, one for pre-zeroed pages and one for the rest. Right now, it's a matter of luck whether you get a pre-zeroed page, and some of the few that we have are being returned to callers who don't want one. This would probably work best for amd64, where there is a distinct direct map pool that is going to be filled by the pmap returning page table pages.
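
A rough sketch of that idea, with hypothetical names: each free pool would get a pair of per-CPU cache zones keyed on whether the cached pages are known to be zeroed, and allocation would prefer the cache matching the request.

/* Hypothetical sketch; today's per-domain page caches are indexed only
 * by free pool, so the structure and function below are invented. */
struct vm_pgcache_pair {
        uma_zone_t zone[2];     /* [0] = unzeroed pages, [1] = PG_ZERO pages */
};
static struct vm_pgcache_pair pgcache[VM_NFREEPOOL];

static vm_page_t
pgcache_alloc(int pool, int req)
{
        vm_page_t m;
        int zeroed;

        zeroed = (req & VM_ALLOC_ZERO) != 0;
        /* Prefer the cache matching the request; fall back to the other. */
        m = uma_zalloc(pgcache[pool].zone[zeroed], M_NOWAIT);
        if (m == NULL)
                m = uma_zalloc(pgcache[pool].zone[!zeroed], M_NOWAIT);
        return (m);
}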