User Details
- User Since: Jan 29 2022, 5:50 PM
Mon, May 27
Sun, May 26
Sat, May 25
Looks good!
Mon, May 20
Address @kib's comment: unmanaged reservations are now distinguished using an invalid vm_object value instead of a separate variable.
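Roughly, the idea is this (a minimal sketch with illustrative names, not the actual diff):

```c
/*
 * Sketch only: an otherwise-impossible vm_object pointer value marks a
 * reservation as unmanaged, replacing the separate variable used before.
 */
#include <stdbool.h>

struct vm_object;
typedef struct vm_object *vm_object_t;

/* Sentinel: no real vm_object can live at this address. */
#define	KSTACK_OBJ_UNMANAGED	((vm_object_t)-1)

static inline bool
reservation_is_unmanaged(vm_object_t obj)
{
	return (obj == KSTACK_OBJ_UNMANAGED);
}
```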
Sat, May 18
Update patch.
Update patch and summary.
Reworked patch and updated summary.
Mon, May 13
Address multiple comments:
- enabled UMA_MD_SMALL_ALLOC unconditionally (see the sketch after this list)
- removed redundant include guard in ppc uma_machdep
- removed obsolete u_int8_t types
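For reference, the common shape the MD uma_small_alloc implementations share on direct-map platforms looks roughly like this (simplified, not verbatim from any one architecture):

```c
#include <sys/param.h>
#include <sys/malloc.h>
#include <vm/vm.h>
#include <vm/vm_extern.h>
#include <vm/vm_page.h>
#include <vm/uma.h>
#include <vm/uma_int.h>
#include <machine/vmparam.h>

/*
 * Sketch: back a small UMA allocation with one wired page and return
 * its direct-map address.
 */
void *
uma_small_alloc(uma_zone_t zone, vm_size_t bytes, int domain,
    uint8_t *flags, int wait)
{
	vm_page_t m;

	*flags = UMA_SLAB_PRIV;
	m = vm_page_alloc_noobj_domain(domain,
	    malloc2vm_flags(wait) | VM_ALLOC_WIRED);
	if (m == NULL)
		return (NULL);
	return ((void *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
}
```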
Sun, May 12
Sat, May 11
Remove redundant m == NULL check.
Simplify vm_thread_stack_back page allocation loop.
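For context, the simplified loop has roughly this shape; whether the patch uses exactly this call is my assumption, and the names around it are illustrative. With VM_ALLOC_WAITOK the allocator sleeps until a page is available instead of returning NULL, which is what made the NULL check redundant:

```c
#include <sys/param.h>
#include <vm/vm.h>
#include <vm/vm_page.h>

/*
 * Sketch: allocate npages wired pages to back a kernel stack.
 * With VM_ALLOC_WAITOK, vm_page_alloc_noobj() never returns NULL.
 */
static void
stack_pages_alloc(vm_page_t ma[], int npages)
{
	for (int i = 0; i < npages; i++)
		ma[i] = vm_page_alloc_noobj(VM_ALLOC_WIRED |
		    VM_ALLOC_WAITOK);
}
```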
Wed, May 8
Add missing arm vmparam.h updates.
May 3 2024
Removed stray UMA code from diff.
Address @markj's comment: uma_small_alloc code deduplication was carved out into a separate revision.
May 2 2024
M_NOFREE was already taken by sys/mbuf.h; M_NEVERFREED is one of the alternative names proposed by @alc.
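For reference, the collision and the rename, sketched (the mbuf flag's value is elided, and the new flag's value is illustrative, not a committed definition):

```c
/*
 * sys/mbuf.h already defines an mbuf flag with this name:
 *	#define M_NOFREE	...	(mbuf is embedded, do not free)
 * so the new page-level flag needs a different identifier:
 */
#define	M_NEVERFREED	0x0001	/* sketch: page is never freed */
```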
May 1 2024
Apr 30 2024
Apr 10 2024
Apr 8 2024
As promised, I've removed some redundant bits from the patch; it should be a bit clearer now.
Regenerate and simplify patch.
Apr 7 2024
Apr 4 2024
Address @markj's comments and fix a couple of issues:
- Certain kstack KVA chunks were released back to the parent arena with improper alignment, causing vmem_xfree to panic (see the sketch after this list)
- Swapping in thread kstacks triggered a panic because the former code relied on vm_page_grab_pages
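To illustrate the first fix, here is the vmem invariant involved, as a hedged sketch (the arena, sizes, and function name are placeholders, not the patch's code): a chunk must be freed with the same size and placement it was allocated with, or vmem's internal accounting panics.

```c
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/vmem.h>

/*
 * Sketch: import a kstack-sized KVA chunk from an arena and release
 * it. vmem_xfree() must see the same size/placement the chunk was
 * allocated with; a mismatch is what panicked here.
 */
static int
kstack_kva_cycle(vmem_t *arena, vmem_size_t size, vmem_size_t align)
{
	vmem_addr_t addr;
	int error;

	error = vmem_xalloc(arena, size, align, 0, 0, VMEM_ADDR_MIN,
	    VMEM_ADDR_MAX, M_NOWAIT, &addr);
	if (error != 0)
		return (error);
	/* ... map pages into the chunk, use it ... */
	vmem_xfree(arena, addr, size);
	return (0);
}
```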
Apr 3 2024
Sorry for the delay: @markj reported a panic when booting this patch on a NUMA machine, and it took me a while to set up a NUMA environment and properly fix the issue.
Update the patch to track kstacks and release them back to their domain arenas; address @kib's comments.
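The tracking idea, roughly (one arena per memory domain; all names here are illustrative, not the actual diff):

```c
#include <sys/param.h>
#include <sys/vmem.h>
#include <vm/vm.h>

/* Sketch: per-domain kstack arenas. */
static vmem_t *kstack_arena[MAXMEMDOM];

/* Free a kstack's KVA back to the arena of the domain it came from. */
static void
kstack_release(vm_offset_t ks, vmem_size_t size, int domain)
{
	vmem_free(kstack_arena[domain], ks, size);
}
```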
Apr 2 2024
You're right: the primary goal was to have a way of faking NUMA topologies in a guest for kernel-testing purposes. I did consider the second goal but ultimately decided to focus on the "fake" bits first and implement the rest in a separate patch.
I'll rework the patch so that it covers both goals.
It also appears to assume that each domain can be described with a single PA range, and I don't really understand why vmm needs to know the CPU affinity of each domain.
I'm not that happy about specifying PA ranges directly. The only other thing I could think of is to let the user specify the amount of memory per domain and let bhyve deal with PA ranges; do you think this is a saner approach?
As for the CPU affinities, these are needed to build the SRAT, but that can be done purely from userspace. I've kept them in vmm in case we want to fetch NUMA topology info using bhyvectl, but I guess that information can be obtained from the guest itself. I'll remove the cpusets.
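To make the "size only" alternative concrete, a hypothetical sketch (none of these names exist in vmm or bhyve): the user supplies per-domain memory amounts and bhyve lays out contiguous PA ranges itself.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical per-domain description. */
struct numa_domain {
	uint64_t size;		/* user-specified amount of memory */
	uint64_t pa_base;	/* derived: start of this domain's range */
};

/* Place domains back to back starting at 'base'. */
static void
layout_domains(struct numa_domain *doms, size_t ndoms, uint64_t base)
{
	for (size_t i = 0; i < ndoms; i++) {
		doms[i].pa_base = base;
		base += doms[i].size;
	}
}
```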
Mar 30 2024
Mar 29 2024
Mar 28 2024
Mar 22 2024
Address @kib's comments.
Mar 21 2024
Remove commented-out lines.
Mar 15 2024
Apologies for the delayed response.
@markj and I will go through the patch once more next week, so it should get committed soon.
Feb 19 2024
Committed in R9:607c4b857a65b68289e9d7f86a8855def93c09f0.
Feb 18 2024
Feb 3 2024
Address @kib's comments.
Jan 26 2024
Jan 6 2024
Aside from the issue @markj pointed out, LGTM.
Dec 12 2023
Address @jhb's comments.
Nov 30 2023
Address @jhb's comments.
Nov 29 2023
Address @markj's comments:
- multiline comment style fixes
- use old mapping scheme on 32-bit systems
Nov 27 2023
Address @alc's comments.
Nov 13 2023
Upload diff with full context.
Nov 8 2023
Nov 7 2023
Rebased the patch.