
Eliminate redundant code by introducing a vm_page_grab_valid()
Abandoned
Public

Authored by jeff on Aug 20 2019, 12:47 AM.

Details

Reviewers
alc
markj
kib
dougm
Summary

This is part of a series of patches intended to move busying out of consumers as much as possible.

There are several implementations of a function that looks up a page and calls the pager if it is not yet valid. These functions sometimes differ in minor details. I have done my best to ensure that the differences are not important, but I may have missed something. In particular, one consumer undirties the resulting page and would zero out valid memory from a fragment, but I do not think this is correct.
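To make the duplication concrete, here is a rough sketch of the per-consumer pattern being consolidated. It is illustrative only: it assumes the object is write-locked and elides the real consumers' differences in busying, wiring, and error handling.

    m = vm_page_grab(object, pindex, VM_ALLOC_NORMAL);
    if (m->valid != VM_PAGE_BITS_ALL) {
        if (vm_pager_has_page(object, pindex, NULL, NULL)) {
            rv = vm_pager_get_pages(object, &m, 1, NULL, NULL);
            if (rv != VM_PAGER_OK) {
                /* Error handling varied by consumer. */
                vm_page_free(m);
                VM_OBJECT_WUNLOCK(object);
                return (EIO);
            }
        } else {
            /* No backing-store copy; zero the invalid parts. */
            vm_page_zero_invalid(m, TRUE);
        }
    }
    vm_page_xunbusy(m);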

Some call the pager's has_page function and some do not. I think we never want to call has_page for vnode objects: it is expensive, and the vnode pagers will fill in zeros for sparse files anyway. The one behavioral difference here would be calling vm_page_grab_valid() with an index beyond the end of the file. Today that would quietly zero-fill a page; if we skip the has_page call, the pager will return an error instead.

I also re-implemented part of vm_page_grab() here so that I could deal more cleanly with the busy-acquire requirements as they relate to the valid field. This function is compatible with the end goal of my patch set, which is to allow busy/valid/dirty manipulation to proceed without the object lock.

I am looking for a little guidance on the details before I commit, in particular on has_page and dirty handling.

Diff Detail

Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 25968
Build 24524: arc lint + arc unit

Event Timeline

jeff retitled this revision from "Eliminate redundant code by introducing a vm_page_grab_valid that will use the pager to bring in pages that are not valid." to "Eliminate redundant code by introducing a vm_page_grab_valid()". Aug 20 2019, 1:01 AM
jeff edited the summary of this revision. (Show Details)
jeff added reviewers: alc, markj, kib, dougm.

has_page() must be used for the swap pager.

For vnodes and the generic BMAP()-based getpages, I remember that the has_page() call is required; this was changed by the getpages() commit by glebius. At least it is required for multi-page getpages(), but it seems the single-page case, as used in your patch, would be fine. There is no such issue with the buffer pager.
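For illustration, the distinction might be expressed roughly as below; the object-type test is a sketch of the idea only, not the code in this diff:

    if (m->valid != VM_PAGE_BITS_ALL) {
        if (object->type == OBJT_VNODE ||
            vm_pager_has_page(object, pindex, NULL, NULL)) {
            /*
             * The vnode pager zero-fills sparse ranges itself, so
             * has_page() can be skipped there; the swap pager must be
             * asked first, since getpages for a page with no swap copy
             * would fail.
             */
            rv = vm_pager_get_pages(object, &m, 1, NULL, NULL);
        } else
            vm_page_zero_invalid(m, TRUE);
    }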

sys/kern/uipc_shm.c
196

Since you are changing the line, idx needs a uintmax_t cast.
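For reference, vm_pindex_t values are printed with %ju and an explicit cast; something along these lines (the message text is only illustrative):

    printf("%s: pager error %d at idx %ju\n", __func__, rv, (uintmax_t)idx);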

This revision is now accepted and ready to land. Aug 20 2019, 10:17 AM
sys/kern/uipc_shm.c
199–201

Why not replace this with VM_ALLOC_WIRED?
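The suggestion, roughly (a sketch only; the actual uipc_shm.c lines are not reproduced here):

    /* Two-step form: grab the page, then wire it under the page lock. */
    m = vm_page_grab(obj, idx, VM_ALLOC_NOBUSY);
    vm_page_lock(m);
    vm_page_wire(m);
    vm_page_unlock(m);

    /* Suggested form: have the grab return the page already wired. */
    m = vm_page_grab(obj, idx, VM_ALLOC_NOBUSY | VM_ALLOC_WIRED);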

sys/vm/vm_glue.c
225

There should not be a space after the cast.
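That is (variable names are illustrative):

    p = (vm_offset_t)addr;     /* preferred */
    p = (vm_offset_t) addr;    /* extra space after the cast */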

sys/vm/vm_page.c
3963

This should be removed. A dirty bit is never set unless the corresponding valid bit is set, so there is no need to clear the dirty bits corresponding to previously unset valid bits. And, hypothetically, if dirty bits were set corresponding to previously set valid bits, this would inappropriately clear them.

So it is OK to skip has_page for vnode objects? This is a slight optimization that seems worthwhile to me.

sys/kern/uipc_shm.c
199–201

https://github.com/freebsd/freebsd/commit/21b376e179c036c9127f4a6cd977a5f60376ac61

It should be, in this instance. But in a future patch, after I can protect valid/dirty with sbusy, I eliminate the wiring entirely.

sys/vm/vm_page.c
3963

Thank you. This is what I thought, but I found some consumers doing things that looked wrong, so I kept it with the XXX comment to be sure.

In D21331#464290, @jeff wrote:

So it is OK to skip has_page for vnode objects? This is a slight optimization that seems worthwhile to me.

I believe yes, as long as the page count == 1.

sys/vm/vm_page.c
3963

One more note: the invariant that dirty bits are only set for valid bits does not hold 100% of the time. I discovered this while doing my busy work, and Mark hit it with syzkaller.

The basic issue is that we call vm_page_dirty() outside of pmap without any locks held. This happens in vm_fault_quick_hold_pages(), for example: pmap_extract_and_hold() returns a referenced page but no object lock, and the instant it returns the page could be partially invalidated. Then we call vm_page_dirty() and the bits are out of sync.
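Schematically (an illustration of the ordering described above, not code from the tree):

    /*
     * thread A                                thread B
     * --------                                --------
     * pmap_extract_and_hold(): the page is
     * held, but no object lock is taken.
     *                                         partially invalidates the page;
     *                                         some valid bits are cleared.
     * vm_page_dirty(m): marks the whole
     * page dirty, including ranges whose
     * valid bits were just cleared.
     */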

sys/vm/vm_page.c
3963

I believe we discussed this several days ago. If the page is invalidated, then it must already be unmapped. In this case the result of vm_fault_quick_hold_pages() is used for I/O, which would be discarded when the page is unwired. The only real problem would be reusing the page before the I/O is finished, but wiring prevents this.

It is a self-inflicted race on the user's part: userspace requested the I/O that did vm_fault_quick_hold_pages() and simultaneously unmapped the page.