
Ensure that arm64 thread structures are allocated from the direct map.
ClosedPublic

Authored by markj on Feb 29 2020, 5:49 PM.
Tags
None
Details

Summary

Otherwise we can fail to handle translation faults on curthread, leading
to a panic.

I don't really like this solution, but without it I readily get random
panics under QEMU.
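
The committed diff itself is not reproduced here. As a hedged illustration of the general technique only, forcing a UMA zone's backing memory to come from the direct map amounts to installing a custom allocation function with uma_zone_set_allocf(), in the spirit of the machine-dependent uma_small_alloc() allocators. The function name thread_alloc_dmap below is invented for the sketch, and the domain argument is simply ignored:

    /*
     * Sketch only, not the actual change: back a zone with pages taken
     * straight from the direct map, so that dereferencing curthread can
     * never require handling a translation fault on a kernel-map mapping.
     */
    #include <sys/param.h>
    #include <sys/malloc.h>
    #include <vm/vm.h>
    #include <vm/vm_param.h>
    #include <vm/vm_page.h>
    #include <vm/uma.h>
    #include <vm/uma_int.h>

    static void *
    thread_alloc_dmap(uma_zone_t zone, vm_size_t bytes, int domain,
        uint8_t *pflag, int wait)
    {
        vm_page_t m;

        /* Single-page slabs only in this sketch. */
        if (bytes > PAGE_SIZE)
            return (NULL);

        *pflag = UMA_SLAB_PRIV;
        /* A domain-aware page allocation would be used in practice. */
        m = vm_page_alloc(NULL, 0,
            malloc2vm_flags(wait) | VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
        if (m == NULL)
            return (NULL);
        /* The returned address lies in the direct map, not the kernel map. */
        return ((void *)PHYS_TO_DMAP(VM_PAGE_TO_PHYS(m)));
    }

The zone would then be pointed at the allocator after creation, e.g. uma_zone_set_allocf(thread_zone, thread_alloc_dmap).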

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint: Not Applicable
Unit Tests: Not Applicable

Event Timeline

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.
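
(For context, the _NOFREE here refers to the thread zone: kern_thread.c creates it with the UMA_ZONE_NOFREE flag, roughly as follows, so its slabs are never released back to the VM.)

    /* Paraphrased from kern_thread.c: thread structures are never freed
     * back to the system, hence UMA_ZONE_NOFREE. */
    thread_zone = uma_zcreate("THREAD", sched_sizeof_thread(),
        thread_ctor, thread_dtor, thread_init, thread_fini,
        32 - 1, UMA_ZONE_NOFREE);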

This revision is now accepted and ready to land. Feb 29 2020, 6:01 PM

Too bad. Are you planning to pursue this further or to move on?

In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Too bad. Are you planning to pursue this further or to move on?

I would like to but probably won't in the near future - I found this bug while investigating another bug which is blocking a project I'm working on. I will try to return to this later.

In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Consider this alternative. We create a second backend to kmem allocations that doesn't do normal reservation-based allocations but preemptively allocates the whole 2MB of physical memory and maps it as such before handing out any KVAs from the region. A vmem arena, submap, etc. could then be used to dole out the unused addresses within the region. As a backstop, if we can't allocate a contiguous 2MB of physical memory, we fall back to smaller allocations.

In D23895#525343, @alc wrote:
In D23895#525319, @alc wrote:

The _NOFREE reminds me that we still need a way to segregate _NOFREE allocations in physical memory. Such segregation would most likely provide contiguity inherently.

The freepool approach requires some work in order to segregate KVA allocations, I believe, but I think that should be straightforward.

Consider this alternative. We create a second backend to kmem allocations that doesn't do normal reservation-based allocations but preemptively allocates the whole 2MB of physical memory and maps it as such before handing out any KVAs from the region. A vmem arena, submap, etc. could then be used to dole out the unused addresses within the region. As a backstop, if we can't allocate a contiguous 2MB of physical memory, we fall back to smaller allocations.

I wrote a patch which implements this using a per-domain vmem arena. At the moment it falls back to the regular allocation path if we fail to import 2MB of physical memory: vmem import functions are currently required to return exactly the amount requested. I guess we could work around this with a function to manually add smaller chunks of contiguous memory to the arena when an allocation fails, instead of using the import mechanism.
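
A minimal sketch of that import scheme follows, using a single arena rather than one per domain, invented names (kmem_contig_import, contig_arena), and the kmem_alloc_contig() signature of the time (returning vm_offset_t); it is illustrative only, not the patch itself:

    /*
     * Sketch only: an arena whose import function grabs a whole 2MB of
     * physically contiguous memory up front.  The import must supply
     * exactly the amount requested, so a failed contiguous allocation
     * returns ENOMEM and the caller can fall back to the regular path.
     */
    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/malloc.h>
    #include <sys/vmem.h>
    #include <vm/vm.h>
    #include <vm/vm_extern.h>

    #define IMPORT_QUANTUM  (2 * 1024 * 1024)   /* one 2MB chunk */

    static int
    kmem_contig_import(void *arg, vmem_size_t size, int flags,
        vmem_addr_t *addrp)
    {
        vm_offset_t addr;

        /*
         * Physically contiguous and 2MB-aligned; a superpage mapping
         * would additionally require suitably aligned KVA.
         */
        addr = kmem_alloc_contig(size, flags, 0, ~(vm_paddr_t)0,
            IMPORT_QUANTUM, 0, VM_MEMATTR_DEFAULT);
        if (addr == 0)
            return (ENOMEM);
        *addrp = (vmem_addr_t)addr;
        return (0);
    }

    static vmem_t *contig_arena;

    static void
    contig_arena_init(void)
    {
        contig_arena = vmem_create("kmem contig", 0, 0, PAGE_SIZE, 0,
            M_WAITOK);
        vmem_set_import(contig_arena, kmem_contig_import, NULL, NULL,
            IMPORT_QUANTUM);
    }

Allocations would then come from vmem_alloc(contig_arena, size, M_BESTFIT | M_WAITOK, &addr), with ENOMEM from the import signalling the fallback case described above.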