User Details
- User Since
- Dec 14 2014, 5:52 AM
Yesterday
Rebase. Add lockp KASSERT.
I am rather concerned that the pathological case of having to walk up to the root and then back down will be commonplace. For example, consider a memory-mapped file that is read sequentially. The first access, when the file is not yet memory resident, will leave the cursor at the end. Subsequent accesses will then have to walk all the way up, and all the way back down, to get to the first page.
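To make the cost concrete, here is a toy model (not the pctrie/vm_radix code): the cursor must ascend from its current key to the lowest common ancestor of the old and new keys and then descend again. WIDTH, the cost function, and the key values are all illustrative assumptions:

/*
 * Toy model of trie-cursor movement: moving from key `from` to key `to`
 * ascends until the two keys share a prefix (the lowest common ancestor)
 * and then descends the same number of levels.  WIDTH is the number of
 * key bits consumed per trie level (illustrative only).
 */
#include <stdio.h>

#define	WIDTH	4
#define	KEYBITS	64

static int
cursor_walk_cost(unsigned long long from, unsigned long long to)
{
	int level;

	for (level = 0; level * WIDTH < KEYBITS &&
	    (from >> (level * WIDTH)) != (to >> (level * WIDTH)); level++)
		;
	return (2 * level);	/* up to the LCA, then back down */
}

int
main(void)
{
	/* Sequential access: adjacent keys share all but the last level. */
	printf("adjacent pages: %d node visits\n",
	    cursor_walk_cost(0x1000, 0x1001));
	/* Cursor left at the end of a large file, next access at page 0. */
	printf("end to start:   %d node visits\n",
	    cursor_walk_cost(0xffffff, 0x0));
	return (0);
}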
Fri, Jul 4
Thu, Jul 3
Tue, Jul 1
Mon, Jun 30
dougm@ has been running stress on a Ryzen processor for more than 24 hours and has seen no ill effects.
Sat, Jun 28
Fri, Jun 27
Thu, Jun 26
Wed, Jun 25
Tue, Jun 24
Mon, Jun 23
A more direct approach would be to change pmap_demote_pde_locked() to handle wired mappings when the PDE was never accessed:
diff --git a/sys/amd64/amd64/pmap.c b/sys/amd64/amd64/pmap.c
index 6d1c2d70d8c0..97ff9c67e8d5 100644
--- a/sys/amd64/amd64/pmap.c
+++ b/sys/amd64/amd64/pmap.c
@@ -6104,9 +6104,7 @@ pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
 	 * Invalidate the 2MB page mapping and return "failure" if the
 	 * mapping was never accessed.
 	 */
-	if ((oldpde & PG_A) == 0) {
-		KASSERT((oldpde & PG_W) == 0,
-		    ("pmap_demote_pde: a wired mapping is missing PG_A"));
+	if ((oldpde & (PG_W | PG_A)) == 0) {
 		pmap_demote_pde_abort(pmap, va, pde, oldpde, lockp);
 		return (false);
 	}
@@ -6164,7 +6162,7 @@ pmap_demote_pde_locked(pmap_t pmap, pd_entry_t *pde, vm_offset_t va,
 	 * have PG_A set in every PTE, then fill it.  The new PTEs will all
 	 * have PG_A set.
 	 */
-	if (!vm_page_all_valid(mpte))
+	if (vm_page_all_valid(mpte) ^ (oldpde & PG_A) != 0)
 		pmap_fill_ptp(firstpte, newpte);
Fri, Jun 20
Introduce VM_ALLOC_COMMON.
Thu, Jun 19
Wed, Jun 18
Tue, Jun 17
Mon, Jun 16
Sun, Jun 15
You should add an entry to ObsoleteFiles.inc.
Sat, Jun 14
Fri, Jun 13
Thu, Jun 12
Wed, Jun 11
Tue, Jun 10
Use busy-style synchronization in Linux emulation.
Mon, Jun 9
Sun, Jun 8
Sat, Jun 7
Fri, Jun 6
Jun 5 2025
@kib Do you have any comments?
Should I bump __FreeBSD_version after this change?
Jun 4 2025
May 31 2025
May 30 2025
May 28 2025
Update comments.
May 26 2025
May 25 2025
If I dramatically reduce the physical memory on the machine, so that reservations are rarely available, then even fewer calls to vm_freelist_add() have to perform a dequeue from a paging queue. The ratio is now about one in sixty:
debug.counters.pending: 15401029
debug.counters.calls: 918326383
debug.counters.not_queued: 914618225
debug.counters.dequeues: 15401153
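For context, the debug.counters.* values above come from ad-hoc instrumentation. The sketch below shows one way such counters could be wired up with counter(9) and sysctl(9); the node and counter names mirror the output above, but the placement of the counter_u64_add() calls is an assumption, not the actual patch:

#include <sys/param.h>
#include <sys/counter.h>
#include <sys/sysctl.h>

/* Hypothetical instrumentation; names mirror the sysctl output above. */
SYSCTL_NODE(_debug, OID_AUTO, counters, CTLFLAG_RD | CTLFLAG_MPSAFE, NULL,
    "lazy page-queue dequeue instrumentation");

static COUNTER_U64_DEFINE_EARLY(counter_calls);
SYSCTL_COUNTER_U64(_debug_counters, OID_AUTO, calls, CTLFLAG_RD,
    &counter_calls, "calls to vm_freelist_add()");
static COUNTER_U64_DEFINE_EARLY(counter_not_queued);
SYSCTL_COUNTER_U64(_debug_counters, OID_AUTO, not_queued, CTLFLAG_RD,
    &counter_not_queued, "pages that were in no page queue");
static COUNTER_U64_DEFINE_EARLY(counter_dequeues);
SYSCTL_COUNTER_U64(_debug_counters, OID_AUTO, dequeues, CTLFLAG_RD,
    &counter_dequeues, "pages that required an explicit dequeue");
static COUNTER_U64_DEFINE_EARLY(counter_pending);
SYSCTL_COUNTER_U64(_debug_counters, OID_AUTO, pending, CTLFLAG_RD,
    &counter_pending, "pages freed with a deferred dequeue still pending");

/*
 * Assumed placement, e.g. at the top of vm_freelist_add():
 *
 *	counter_u64_add(counter_calls, 1);
 *	if (vm_page_astate_load(m).queue == PQ_NONE)
 *		counter_u64_add(counter_not_queued, 1);
 *	else
 *		counter_u64_add(counter_dequeues, 1);
 */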
I suspect that by the time UMA performs vm_page_zone_release(), any pending dequeues on the pages have completed.
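One way to test that suspicion would be an assertion in vm_page_zone_release(). This is a minimal sketch, assuming the standard UMA release-callback signature and that a completed dequeue leaves the page's atomic queue state at PQ_NONE with no pending queue operations; the existing body of the function is elided:

static void
vm_page_zone_release(void *arg, void **store, int cnt)
{
	vm_page_astate_t as;
	vm_page_t m;
	int i;

	for (i = 0; i < cnt; i++) {
		m = (vm_page_t)store[i];
		as = vm_page_astate_load(m);
		KASSERT(as.queue == PQ_NONE &&
		    (as.flags & PGA_QUEUE_STATE_MASK) == 0,
		    ("%s: page %p has pending queue state", __func__, m));
	}
	/* ... existing release of the pages to the per-domain free queues ... */
}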
To better understand the locking behavior, i.e., when a page queue lock is acquired while a free queues lock is held, I applied the following changes:
diff --git a/sys/vm/vm_phys.c b/sys/vm/vm_phys.c
index 9261e52705fe..2c8eb510dbf5 100644
--- a/sys/vm/vm_phys.c
+++ b/sys/vm/vm_phys.c
@@ -389,12 +389,26 @@ sysctl_vm_phys_locality(SYSCTL_HANDLER_ARGS)
 }
 #endif
May 24 2025
I speculate that the main source of the additional queue_nops is partially populated reservations that had only a small number of populated pages. For such a reservation, the popcount reaches zero and the reservation is returned to the buddy queues before the current batch of deferred dequeues, which includes the very first page in the reservation, hits the threshold for batched processing. Consequently, the vm_freelist_add() on that first page has to do a vm_page_dequeue() itself, and the later batched processing finds nothing left to do.
Observe that I am not actually calling vm_page_dequeue() before calling vm_phys_free_pages() in various places. Instead, I am relying entirely on a call to vm_page_dequeue() from vm_freelist_add(). The argument for not unconditionally calling vm_page_dequeue() before calling vm_phys_free_pages() is that some fraction of the time, the dequeue can still be deferred because the page, or chunk of pages, that we are freeing will be the "right-hand" buddy to a page that is already in the buddy queues.
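A minimal sketch of that arrangement, assuming the historical four-argument vm_freelist_add() from vm_phys.c; only the vm_page_dequeue() call is the point here, the rest paraphrases the existing function:

static void
vm_freelist_add(struct vm_freelist *fl, vm_page_t m, int order, int tail)
{
	/*
	 * Complete any deferred page queue operation now, because the
	 * buddy queues reuse plinks.q for their own linkage.  Most of
	 * the time the page is in no page queue and this is a cheap
	 * no-op, as the counters above show.
	 */
	vm_page_dequeue(m);

	m->order = order;
	if (tail)
		TAILQ_INSERT_TAIL(&fl[order].pl, m, plinks.q);
	else
		TAILQ_INSERT_HEAD(&fl[order].pl, m, plinks.q);
	fl[order].lcnt++;
}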
May 23 2025
I think that there is another edge case that isn't being handled. Similar to my comment about the vm_page_dequeue after vm_reserv_alloc_page, suppose that a page is allocated from a reservation via vm_page_alloc(), validated, mapped, added to a paging queue, and then later freed back to the reservation. Now, suppose that the reservation is broken and the recently-freed page is passed to the buddy allocator. The lazy dequeue may not have completed yet. I think that reservation breaking will need to perform the vm_page_dequeue on each of the pages being passed to the buddy allocator.
May 18 2025
May 17 2025
May 16 2025
May 13 2025
May 12 2025
May 8 2025
May 7 2025
May 5 2025
May 1 2025
The key here is that all page allocation functions call vm_page_dequeue(), completing any lingering page queue operations that involve the plinks.q field.
Apr 30 2025
Are you going to update the patch to include vm_page_grab_pages()?