vm_page_alloc() just calls vm_page_alloc_after() after looking up the predecessor of the page being allocated. Many callers of vm_page_alloc() already know that predecessor; letting them pass it to vm_page_alloc_after() directly could save a little redundant computation.
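To make the shape of the change concrete, here is a toy sketch of the wrapper relationship, with made-up types and names; it is not the FreeBSD code (the real functions also take request flags, assert the object lock, and keep resident pages in a radix trie).

```c
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-ins: an "object" holds its pages as a list sorted by pindex. */
struct page {
	unsigned long pindex;
	struct page *next;
};

struct object {
	struct page *pages;	/* sorted by pindex, ascending */
};

/* Find the resident page with the greatest pindex less than 'pindex'. */
static struct page *
find_predecessor(struct object *obj, unsigned long pindex)
{
	struct page *p, *pred;

	pred = NULL;
	for (p = obj->pages; p != NULL && p->pindex < pindex; p = p->next)
		pred = p;
	return (pred);
}

/* Allocate and insert a page when the predecessor is already known. */
static struct page *
page_alloc_after(struct object *obj, unsigned long pindex, struct page *mpred)
{
	struct page *m;

	if ((m = malloc(sizeof(*m))) == NULL)
		return (NULL);
	m->pindex = pindex;
	if (mpred == NULL) {			/* becomes the first page */
		m->next = obj->pages;
		obj->pages = m;
	} else {
		m->next = mpred->next;
		mpred->next = m;
	}
	return (m);
}

/* The plain variant pays for a predecessor lookup on every call. */
static struct page *
page_alloc(struct object *obj, unsigned long pindex)
{
	return (page_alloc_after(obj, pindex, find_predecessor(obj, pindex)));
}

int
main(void)
{
	struct object obj = { NULL };
	struct page *mpred, *p;

	mpred = page_alloc(&obj, 10);		/* lookup + insert */
	page_alloc_after(&obj, 20, mpred);	/* caller already knows mpred */
	for (p = obj.pages; p != NULL; p = p->next)
		printf("pindex %lu\n", p->pindex);
	return (0);
}
```

The point is only that the plain variant repeats a predecessor lookup that a caller repeatedly allocating into the same object range may already have in hand.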
Event Timeline
sys/vm/swap_pager.c:1421–1423
    I don't think so. If ma[0]->pindex is 20, and mpred->pindex is 10, then we're inserting page 19 after 10, then 18 after 10, then 17 after 10, etc. (A sketch of this backward fill follows below.)

sys/vm/vm_page.c:5088
    True. It was meant to go into my last commit, but I didn't save the file, so it didn't happen.

sys/vm/vnode_pager.c:1060–1062
    I don't think so.
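Regarding the swap_pager.c comment above about why mpred doesn't change: a schematic of that backward fill, reusing the toy types from the first sketch (illustrative only, not the swap_pager code; the parameter names are invented).

```c
/* Fill in pindexes ma0_pindex - 1, ma0_pindex - 2, ..., mpred->pindex + 1. */
static void
fill_backward(struct object *obj, struct page *mpred, unsigned long ma0_pindex)
{
	unsigned long i;

	for (i = ma0_pindex - 1; i > mpred->pindex; i--)
		(void)page_alloc_after(obj, i, mpred);	/* mpred stays fixed */
}
```

With the numbers from the comment (ma[0]->pindex == 20, mpred->pindex == 10), this inserts 19, 18, ..., 11, each immediately after the page at pindex 10; the ordering stays correct precisely because the fill runs backward.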
sys/vm/swap_pager.c:1421–1423
    To a casual reader, which I was at first, this looks suspect. A brief comment about why mpred doesn't change would be helpful.
I got this with D49103.151389.patch:
20250224 11:38:23 all (254/953): mmap8.sh
panic: vm_reserv_from_object: msucc doesn't succeed pindex
cpuid = 4
time = 1740393505
KDB: stack backtrace:
db_trace_self_wrapper() at db_trace_self_wrapper+0x2b/frame 0xfffffe01087809b0
vpanic() at vpanic+0x136/frame 0xfffffe0108780ae0
panic() at panic+0x43/frame 0xfffffe0108780b40
vm_reserv_from_object() at vm_reserv_from_object+0xc9/frame 0xfffffe0108780b50
vm_reserv_alloc_page() at vm_reserv_alloc_page+0x72/frame 0xfffffe0108780bb0
vm_page_alloc_domain_after() at vm_page_alloc_domain_after+0x140/frame 0xfffffe0108780c40
vm_page_alloc_after() at vm_page_alloc_after+0x54/frame 0xfffffe0108780cb0
vm_fault_copy_entry() at vm_fault_copy_entry+0x32f/frame 0xfffffe0108780d60
vm_map_protect() at vm_map_protect+0x72b/frame 0xfffffe0108780df0
sys_mprotect() at sys_mprotect+0x9f/frame 0xfffffe0108780e00
amd64_syscall() at amd64_syscall+0x15a/frame 0xfffffe0108780f30
fast_syscall_common() at fast_syscall_common+0xf8/frame 0xfffffe0108780f30
--- syscall (74, FreeBSD ELF64, mprotect), rip = 0x822c0ed7a, rsp = 0x820d0ee78, rbp = 0x820d0eea0 ---
KDB: enter: panic
[ thread pid 82131 tid 102074 ]
Stopped at kdb_enter+0x33: movq $0,0x104ed52(%rip)
db>
It's easy to reproduce on my test box.
In vm_fault.c, correct the initial value of mpred in the case that src_object == dst_object.
sys/vm/swap_pager.c:1993
    In the longer term, wouldn't it be preferable to pass the iterator directly to vm_page_alloc_<mumble>(), so that the allocator can use it to insert the page into the radix trie? The whole point of the _after variant is to avoid having to look up the previous page when inserting the new one, and the reason we look up the previous page is so that we can insert the new page into the object's memq in vm_page_insert_radixdone(), but in the long term we'd like the memq to go away.
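To spell out the contrast being drawn, a hypothetical interface sketch; these declarations are invented and do not match the real vm_page or pctrie interfaces.

```c
struct object;
struct page;
struct page_iter;	/* stand-in for an object page iterator */

/*
 * Today's shape: the caller supplies the predecessor so the allocator can
 * link the new page after it on the object's memq once the radix-trie
 * insertion succeeds.
 */
struct page *page_alloc_after(struct object *obj, unsigned long pindex,
    struct page *mpred);

/*
 * Possible future shape: the iterator remembers where the trie lookup
 * ended, so the allocator can insert the new page from that position
 * without a separate predecessor argument (and without the memq link,
 * should memq eventually go away).
 */
struct page *page_alloc_iter(struct object *obj, struct page_iter *it,
    unsigned long pindex);
```

In the first form the caller-provided predecessor carries the position information; in the second, that information travels inside the iterator.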
sys/vm/swap_pager.c:1993
    Yes, I'd like to be passing iterators someday. I'd hoped this would be a small step forward that wouldn't alarm anyone who remains suspicious of iterators.
    Actually, memq is just one reason for passing mpred to _after(). Another reason is that we use the address of mpred in vm_reserv_alloc_contig() to find the reservation that we might be able to allocate from. As you have observed previously, passing an iterator to vm_reserv_alloc_contig() would work as well.
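The mpred-to-reservation connection mentioned above, as a hedged sketch; the helper names are invented, and the real logic lives in vm_reserv_alloc_contig() and the reservation code.

```c
struct object;
struct page;
struct reservation;

/* Hypothetical helpers: find the reservation containing a page, test
 * whether it spans (obj, pindex), and allocate a page out of it. */
struct reservation *reservation_containing(struct page *m);
int reservation_covers(struct reservation *rv, struct object *obj,
    unsigned long pindex);
struct page *reservation_pop_page(struct reservation *rv,
    unsigned long pindex);

/*
 * The predecessor page is the breadcrumb: from mpred we can locate a
 * candidate reservation, and if that reservation spans the pindex being
 * allocated in the same object, the allocation may be satisfied from it
 * without touching the free lists.
 */
static struct page *
try_alloc_from_reservation(struct object *obj, unsigned long pindex,
    struct page *mpred)
{
	struct reservation *rv;

	if (mpred == NULL)
		return (NULL);
	rv = reservation_containing(mpred);
	if (rv != NULL && reservation_covers(rv, obj, pindex))
		return (reservation_pop_page(rv, pindex));	/* may still fail */
	return (NULL);
}
```

Whether the position arrives as a predecessor page or as an iterator, the allocator needs something that identifies the neighborhood of the new pindex, which is why either form can serve the reservation code.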
I completed a full stress2 test run with D49103.151425.patch without seeing any issues.