
Ensure that arm64 thread structures are allocated from the direct map.
ClosedPublic

Authored by markj on Feb 29 2020, 5:49 PM.
Details

Summary

Otherwise we can fail to handle translation faults on curthread, leading
to a panic.

I don't really like this solution, but without it I readily get random
panics under QEMU.


Event Timeline

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.
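The idea of segregating never-freed allocations can be modeled in user space: draw _NOFREE objects from their own dedicated region with a bump allocator, so they pack together instead of fragmenting the general-purpose pool. This is a minimal illustrative sketch, not the kernel's actual allocator; all names and the 2MB region size are assumptions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical model: never-freed (_NOFREE-style) objects are
 * bump-allocated from one dedicated contiguous region, so they
 * stay packed together; ordinary allocations use the general pool. */
#define NOFREE_REGION_SIZE (2u * 1024 * 1024) /* model a 2MB chunk */

static uint8_t nofree_region[NOFREE_REGION_SIZE];
static size_t nofree_cursor;

/* Allocations that will never be freed: carve from the segregated region. */
static void *alloc_nofree(size_t size)
{
    size = (size + 15) & ~(size_t)15;  /* keep 16-byte alignment */
    if (nofree_cursor + size > NOFREE_REGION_SIZE)
        return NULL;                   /* region exhausted */
    void *p = &nofree_region[nofree_cursor];
    nofree_cursor += size;
    return p;
}

/* Ordinary allocations go to the general pool (malloc stands in here). */
static void *alloc_regular(size_t size)
{
    return malloc(size);
}
```

Because the segregated region is one contiguous array, successive _NOFREE allocations land back-to-back, which is the contiguity the comment above refers to.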

This revision is now accepted and ready to land. Feb 29 2020, 6:01 PM

Too bad. Are you planning to pursue this further or to move on?

In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Too bad. Are you planning to pursue this further or to move on?

I would like to but probably won't in the near future - I found this bug while investigating another bug which is blocking a project I'm working on. I will try to return to this later.

In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Consider this alternative. We create a second backend to kmem allocations that doesn't do normal reservation-based allocations but preemptively allocates the whole 2MB of physical memory and maps it as such before handing out any KVAs from the region. A vmem arena, submap, etc. could then be used to dole out the unused addresses within the region. As a backstop, if we can't allocate a contiguous 2MB of physical memory, we fall back to smaller allocations.
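The alternative backend described above can be sketched in user space: grab the whole 2MB chunk up front, dole out page-aligned pieces of it with a cursor (a stand-in for the vmem arena or submap), and fall back to smaller individual allocations when the big chunk cannot be obtained. This is a model under stated assumptions, not the kmem implementation; names and sizes are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Sketch of the proposed second kmem backend: preallocate a full 2MB
 * region (standing in for a superpage-mapped physical run) and hand
 * out pieces of it before falling back to smaller allocations. */
#define CHUNK_SIZE      (2u * 1024 * 1024)
#define PAGE_SIZE_MODEL 4096u

struct backend {
    uint8_t *chunk;  /* the 2MB region, or NULL if unavailable */
    size_t   cursor; /* next free offset (a vmem arena in the real design) */
};

static bool backend_init(struct backend *be, bool simulate_failure)
{
    be->cursor = 0;
    be->chunk = simulate_failure ? NULL : malloc(CHUNK_SIZE);
    return be->chunk != NULL;
}

/* Hand out page-aligned pieces of the chunk; when the chunk is missing
 * or full, fall back to an ordinary smaller allocation (the backstop). */
static void *backend_alloc(struct backend *be, size_t size)
{
    size = (size + PAGE_SIZE_MODEL - 1) & ~(size_t)(PAGE_SIZE_MODEL - 1);
    if (be->chunk != NULL && be->cursor + size <= CHUNK_SIZE) {
        void *p = be->chunk + be->cursor;
        be->cursor += size;
        return p;
    }
    return malloc(size); /* backstop: smaller, non-contiguous allocation */
}
```

The key property is that every allocation served from the chunk is guaranteed to lie inside one physically contiguous (and, in the real design, superpage-mapped) region.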

In D23895#525343, @alc wrote:
In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Consider this alternative. We create a second backend to kmem allocations that doesn't do normal reservation-based allocations but preemptively allocates the whole 2MB of physical memory and maps it as such before handing out any KVAs from the region. A vmem arena, submap, etc. could then be used to dole out the unused addresses within the region. As a backstop, if we can't allocate a contiguous 2MB of physical memory, we fall back to smaller allocations.

I wrote a patch which implements this using a per-domain vmem arena. At the moment it falls back to the regular allocation path if we fail to import 2MB of physical memory: vmem import functions are currently required to return exactly the amount requested. I guess we could work around this with a function to manually add smaller chunks of contiguous memory to the arena when an allocation fails, instead of using the import mechanism.
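The workaround suggested above — adding smaller contiguous chunks to the arena directly rather than relying on an import callback that must return exactly the requested amount — can be illustrated with a toy arena. This is purely a model of the idea, not vmem(9); the structure and function names are hypothetical.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy model of the workaround: rather than an import callback that
 * must supply exactly the amount requested (e.g. a full 2MB), the
 * arena exposes an "add" operation that accepts whatever contiguous
 * chunk was actually obtained, even if smaller. */
#define MAX_SPANS 8

struct toy_arena {
    struct { uintptr_t base; size_t size; } spans[MAX_SPANS];
    int nspans;
};

/* Add a contiguous chunk of any size to the arena's free spans. */
static int toy_arena_add(struct toy_arena *a, uintptr_t base, size_t size)
{
    if (a->nspans == MAX_SPANS)
        return -1;
    a->spans[a->nspans].base = base;
    a->spans[a->nspans].size = size;
    a->nspans++;
    return 0;
}

/* First-fit allocation from the recorded spans; 0 means no fit,
 * and the caller falls back to the regular allocation path. */
static uintptr_t toy_arena_alloc(struct toy_arena *a, size_t size)
{
    for (int i = 0; i < a->nspans; i++) {
        if (a->spans[i].size >= size) {
            uintptr_t p = a->spans[i].base;
            a->spans[i].base += size;
            a->spans[i].size -= size;
            return p;
        }
    }
    return 0;
}
```

With this shape, a failed 2MB physical allocation no longer poisons the arena: the caller simply adds whatever smaller contiguous run it did get, and oversized requests fall through to the regular path.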