Wed, Apr 10
Mon, Apr 8
As promised, I've removed some redundant bits from the patch; it should be a bit clearer now.
Regenerate and simplify patch.
Sun, Apr 7
Thu, Apr 4
Address @markj's comments and fix a couple of issues:
- Certain kstack KVA chunks were released back to the parent arena with improper alignment, causing vmem_xfree to panic
Wed, Apr 3
Sorry for the delay: @markj reported a panic when booting with this patch on a NUMA machine, and it took me a while to set up a NUMA environment and properly fix the issue.
Update patch to track and properly release kstacks to domain arenas; address @kib's comments.
Tue, Apr 2
In D44565#1016805, @markj wrote:
> It's not clear to me why we don't extend the vm_memmap structure instead.
> Stepping back for a second, the goal of this patch is not really clear to me. I can see two possibilities:
> - We want to create a fake NUMA topology, e.g., to make it easier to use bhyve to test NUMA-specific features in guest kernels.
> - We want some way to have bhyve/vmm allocate memory from multiple physical NUMA domains on the host, and pass memory affinity information to the guest. In that case, vmm itself needs to ensure, for example, that the VM object for a given memseg has the correct NUMA allocation policy.
> I think this patch ignores the second goal and makes it harder to implement in the future.
You're right, the primary goal was to have a way of faking NUMA topologies in a guest for kernel testing purposes. I did consider the second goal but ultimately decided to focus on the "fake" bits first and implement the rest in a separate patch.
I'll rework the patch so that it covers both goals.
> It also appears to assume that each domain can be described with a single PA range, and I don't really understand why vmm needs to know the CPU affinity of each domain.
I'm not that happy about specifying PA ranges directly. The only other thing I could think of is to let the user specify the amount of memory per domain and let bhyve deal with the PA ranges; do you think that is a saner approach?
As for the CPU affinities, these are needed for the SRAT, but that can be done purely from userspace. I've kept them in vmm in case we want to query NUMA topology info with bhyvectl, but I guess that information can be obtained from the guest itself. I'll remove the cpusets.
Sat, Mar 30
Fri, Mar 29
Thu, Mar 28
Fri, Mar 22
Address @kib's comments.
Mar 21 2024
Remove commented-out lines.
Mar 15 2024
Apologies for the delayed response.
@markj and I will go through the patch once more next week, so it should get committed soon.
Feb 19 2024
Committed in R9:607c4b857a65b68289e9d7f86a8855def93c09f0.
Feb 18 2024
Feb 3 2024
Address @kib's comments.
Jan 26 2024
Jan 6 2024
Aside from the issue @markj pointed out, LGTM.
Dec 12 2023
Address @jhb's comments.
Nov 30 2023
In D42405#976891, @jhb wrote:
> If you merge this first, then the breakpoint patch doesn't need 'handled = 0', right?
Address @jhb's comments.
Nov 29 2023
In D38852#976094, @markj wrote:
> In D38852#975886, @bojan.novkovic_fer.hr wrote:
>> In D38852#957605, @markj wrote:
>>> One other open issue that we didn't discuss much yet is what to do on 32-bit systems. Since the kernel virtual address space is much more limited there, it perhaps (probably?) doesn't make much sense to change the existing scheme. This is especially true if we start trying to align virtual mappings of stacks as Alan suggests.
>> Given the recent decision to deprecate and potentially remove support for 32-bit systems with 15.0, do you think that this is still an issue we should address?
> Yes - the actual removal of 32-bit kernels isn't going to happen anytime soon. 15.0 is two years out, and it's not entirely clear to me that we're ready to remove 32-bit ARM support.
> But, can't we handle this simply by making vm_kstack_pindex() use the old KVA<->pindex mapping scheme if _ILP32 is defined?
Address @markj's comments:
- multiline comment style fixes
- use old mapping scheme on 32-bit systems
Nov 27 2023
In D38852#957605, @markj wrote:
> One other open issue that we didn't discuss much yet is what to do on 32-bit systems. Since the kernel virtual address space is much more limited there, it perhaps (probably?) doesn't make much sense to change the existing scheme. This is especially true if we start trying to align virtual mappings of stacks as Alan suggests.
Address @alc's comments.
Nov 13 2023
Upload diff with full context.
Nov 8 2023
In D41633#969928, @markj wrote:
> Would you like to do the honours of reverting the vm_fault change? :)
Nov 7 2023
In D41633#969403, @markj wrote:
> Sorry for the delayed reply. The above-mentioned riscv pmap commits are in main now - @bojan.novkovic_fer.hr would it be possible to rebase on top of that? Then I can commit this patch, and we can finally revert D19670.
Rebased the patch.
Oct 30 2023
Address @corvink's comments.
Address @corvink's comments.
Oct 26 2023
In D40772#966839, @eugen_grosbein.net wrote:
> Note this became more important since we have had ASLR turned on for 64-bit processes since 13.2-RELEASE. ASLR adds a great deal of fragmentation, which leads to significant performance degradation over the long run as superpages become unusable due to memory fragmentation.
Oct 19 2023
Oct 12 2023
Oct 11 2023
Fix formatting for multiline comment in teken_utf8_bytes_to_codepoint.
Address @christos's comments:
- Add a more detailed explanation of the use of __builtin_clz
Other fixes:
- Codepoint calculation for two-byte sequences was missing one bit in the mask used for the leading character; this is now fixed
- ttydisc_rubchar now falls back to non-UTF-8 behaviour if teken_wcwidth returns an error
Oct 10 2023
Oct 7 2023
In D42067#960711, @christos wrote:
> Tested both patches and they seem to run without problems. Is there a reason we don't want the IUTF8 flag to be set by default? At least in my opinion, backspacing UTF-8 characters is common enough that this should be a "builtin" feature, instead of having to run stty iutf8 in a startup script or do it manually. That being said, I am not exactly aware of the side effects (if any) this could have.
I've updated the manpages for stty and termios and moved IUTF8 behind __BSD_VISIBLE.
Oct 6 2023
Quick update: I've tested the patch with different numbers of kstack guard pages (1-4) and encountered no issues.
Each run consisted of running the test suite and rebuilding the whole kernel.
I have; there are no duplicates for 1-4 guard pages.
I also think that duplicates are mathematically impossible in this case, since the function is injective on its domain.
Its domain, however, is not contiguous.
I've whipped up a quick graph of the first few kvas: {F68913014}.
Oct 5 2023
Address @christos's comments - properly handle zero-width characters.
Oct 3 2023
Oct 1 2023
@markj I tried the new mlock superpage test case; the patch didn't cause a panic.
I've updated the diff to work with an arbitrary number of guard pages, but I didn't have time to test it with different guard-page configurations. I'll do that in the coming days and report back.
The formula has been slightly reworked to account for the direction of stack growth (adding 1 to the result of lin_pindex(kva) / kstack_size, and using a different condition for detecting guard pages).
Sep 27 2023
In D38852#956753, @kib wrote:
> So what is the main purpose of this change? To get linear, without gaps, pindexes for the kernel stack obj?
In D41635#957353, @alc wrote:
> I've just committed 902ed64fecbe which eliminates the need for zeroing the PTP and the possibility of an assertion failure if you pass the same arguments to pmap_insert_pt_page() as you did on amd64. pmap_insert_pt_page() now has the needed extra argument.
Address @alc's comments.
Sep 22 2023
Sep 20 2023
Address @alc's comments.