
Move the kernel virtual memory around to support 2TB of DMAP
Closed, Public

Authored by andrew on Apr 6 2016, 2:28 PM.
Details

Summary

This increases the kernel address space to 512GB, and the DMAP region
to 2TB. The latter can be increased in 512GB chunks by adjusting the lower
address; however, more work will be needed to increase the former.
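
As a rough sketch (not the exact constants from this diff), the layout described above could be expressed in vmparam.h along the following lines; the addresses are illustrative placeholders chosen only to be consistent with the 512GB/2TB sizes:

/*
 * Illustrative only: a 512GB kernel map and a 2TB direct map in the
 * upper half of the address space.  The DMAP grows downward, so it can
 * be enlarged in 512GB steps by lowering DMAP_MIN_ADDRESS.
 */
#define	VM_MIN_KERNEL_ADDRESS	(0xffff000000000000UL)
#define	VM_MAX_KERNEL_ADDRESS	(0xffff008000000000UL)	/* 512GB above the base */

#define	DMAP_MIN_ADDRESS	(0xfffffd0000000000UL)
#define	DMAP_MAX_ADDRESS	(0xffffff0000000000UL)	/* exclusive; 2TB above DMAP_MIN_ADDRESS */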

Test Plan

Tested on QEMU so far; still needs testing on hardware.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

andrew retitled this revision to Move the kernel virtual memory around to support 2TB of DMAP.
andrew updated this object.
andrew edited the test plan for this revision. (Show Details)
andrew added reviewers: arm64, kib.
andrew added a subscriber: emaste.

I have a general question about one aspect of DMAP on arm64; the patch does not change it much. I looked at the "B2.9 Mismatched memory attributes" section of the ARM ARM for ARMv8, and it seems that our use of DMAP is not compatible with the requirements there. E.g., if any device memory, which must be non-cacheable/strongly ordered, falls into the L1-mapped tail of the physical memory, I read the spec as saying that the guarantees are off due to the existing aliased mapping with incompatible attributes.

On amd64 this is handled by avoiding over-mapping by the DMAP, and by splitting superpages out of the DMAP if pmap_mapdev() falls into the DMAP range at runtime. On arm64 the page attribute support seems to be rudimentary, if it exists at all. But I suspect that the issue of mismatched attributes for aliases is real, and always mapping 2TB could make the DMAP hit device memory more often (I have no idea about the physical address layout of real platforms).
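
For reference, a minimal sketch of the amd64-style handling described above; the names (dmap_phys_max, PHYS_TO_DMAP(), pmap_change_attr(), pmap_mapdev_attr()) are borrowed from amd64 and used as assumptions about what an arm64 equivalent might look like, not as code from this diff:

#include <sys/param.h>
#include <vm/vm.h>
#include <vm/pmap.h>
#include <machine/vmparam.h>

/* Assumed: one past the highest physical address covered by the DMAP. */
extern vm_paddr_t dmap_phys_max;

/*
 * Sketch only: when a device region is already covered by the direct
 * map, change the attributes of the existing DMAP alias (demoting the
 * covering superpage as needed) instead of creating a second mapping
 * with conflicting attributes.
 */
static void *
mapdev_sketch(vm_paddr_t pa, vm_size_t size, int mode)
{
	vm_offset_t va;

	if (pa < dmap_phys_max && pa + size <= dmap_phys_max) {
		va = PHYS_TO_DMAP(pa);
		if (pmap_change_attr(va, size, mode) == 0)
			return ((void *)va);
	}
	/* Otherwise fall back to a fresh KVA mapping. */
	return (pmap_mapdev_attr(pa, size, mode));
}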

sys/arm64/include/vmparam.h
151 (On Diff #14927)

IMO it would be very useful to add a map of the KVA, similar to the one in amd64/include/vmparam.h. The arm64 map looks simpler, but still, when debugging an issue and seeing a raw address in a backtrace or in registers, it is much easier to look at the map than to find the matching pair of defines.
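
Something along these lines (purely illustrative addresses, chosen to match the 512GB/2TB sizes in the summary rather than the exact constants in the diff):

/*
 * Virtual memory map (illustrative):
 *
 * 0x0000000000000000 - 0x0000ffffffffffff   user map
 * 0xffff000000000000 - 0xffff007fffffffff   512GB kernel map
 * 0xffff008000000000 - 0xfffffcffffffffff   unused
 * 0xfffffd0000000000 - 0xfffffeffffffffff   2TB direct map (DMAP)
 * 0xffffff0000000000 - 0xffffffffffffffff   unused
 */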

162 (On Diff #14927)

The change of the DMAP_MAX_ADDRESS semantics from the real max address to (max address + 1) is subtle. E.g., it changes the value recorded in the minidump header. Is libkvm ready for that?

This is a known issue with the pmap code. I'm planning to fix it when adding support for superpages. The fix will need us to split the 1GB blocks we currently use whenever we map anything other than cacheable normal memory.

I can look at limiting the DMAP region so it is closer to the RAM we have. The start is already handled; it will just need a change to stop earlier. It may also require us to not map addresses that are not backed by RAM, however I know of no hardware that places device memory between two RAM blocks, so this is less of an issue.
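
A rough sketch of what clamping the DMAP to RAM could look like, assuming the bootstrap code is handed a physmap[] array of physical address ranges as on other FreeBSD platforms; the names here are hypothetical, not taken from this diff:

#include <sys/param.h>
#include <vm/vm.h>

/*
 * Sketch only: track the lowest and highest physical addresses in the
 * firmware memory map, so pmap_bootstrap() could block-map just that
 * range into the DMAP instead of the full 2TB window.
 */
static vm_paddr_t dmap_phys_min = (vm_paddr_t)-1;
static vm_paddr_t dmap_phys_max = 0;

static void
dmap_bounds_sketch(vm_paddr_t *physmap, u_int physmap_idx)
{
	u_int i;

	/* physmap[] holds (start, end) pairs, as on other platforms. */
	for (i = 0; i < physmap_idx; i += 2) {
		if (physmap[i] < dmap_phys_min)
			dmap_phys_min = physmap[i];
		if (physmap[i + 1] > dmap_phys_max)
			dmap_phys_max = physmap[i + 1];
	}
}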

sys/arm64/include/vmparam.h
162 (On Diff #14927)

By my reading of libkvm, it handles this correctly. In _aarch64_minidump_vatop we have the following:

if (va >= vm->hdr.dmapbase && va < vm->hdr.dmapend) {

This is the only place in libkvm I can see that uses the max address, other than fixing up its endianness. With the new semantics it will correctly accept the last DMAP address.
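
In other words (a small illustration, assuming hdr.dmapend is filled from the new, exclusive DMAP_MAX_ADDRESS):

/* Exclusive upper bound: the last byte of the DMAP still passes this test. */
if (va >= vm->hdr.dmapbase && va < vm->hdr.dmapend)
	/* translate via the direct map */;

/* With an inclusive "real max" value the test would have to use <= instead. */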

kib edited edge metadata.
This revision is now accepted and ready to land. Apr 6 2016, 4:52 PM
This revision was automatically updated to reflect the committed changes.