
amd64: Introduce KMSAN shadow maps
Closed, Public

Authored by markj on Jul 23 2021, 11:14 PM.
Details

Summary

This is required for KMSAN support, which applies MemorySanitizer[*] to
the kernel.

This is similar to the existing support for the KASAN shadow map, but with
some differences:


- KMSAN requires two shadow maps: the shadow proper, which tracks the
initialization state of memory in the kernel map, and the origin map,
which stores compressed pointers encoding the source of uninitialized
memory. KMSAN raises warnings only when uninitialized memory is used as
a source operand in an operation that affects a program's behaviour,
such as a conditional branch; memory copies do not raise warnings. The
origin map therefore exists only to aid debugging in cases where
uninitialized memory is copied many times before it is used. For some
reason LLVM does not make it optional for the kernel.
- Both shadow maps are 1:1 with the kernel map, whereas KASAN's shadow
map is 1:8, so KMSAN has much larger memory overhead (see the
address-translation sketch after this list).
- I chose not to create shadows of the vm_page array or of memory above
KERNBASE for now. Otherwise the shadow maps consume a significant
fraction of memory (with KMSAN instrumentation the kernel text is
already bloated significantly), and I don't believe that shadowing these
regions will help expose significant bugs. This may be changed later.
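
Since both maps are 1:1 with the kernel map, translating a kernel address
into its shadow or origin counterpart is a constant-offset computation,
with none of KASAN's divide-by-8 scaling. Below is a minimal sketch of
that translation; the base constants are placeholders, not the values in
the amd64 headers:

    #include <stdint.h>

    /* Placeholder layout constants; the real ones live in vmparam.h. */
    #define KVA_MIN         0xfffffe0000000000ULL   /* start of kernel map */
    #define KMSAN_SHAD_BASE 0xfffffc0000000000ULL   /* shadow map base */
    #define KMSAN_ORIG_BASE 0xfffffa0000000000ULL   /* origin map base */

    /* Each kernel byte has exactly one shadow byte (1:1, unlike KASAN's 1:8). */
    static inline uint8_t *
    kmsan_shad_addr(uintptr_t addr)
    {
            return ((uint8_t *)(KMSAN_SHAD_BASE + (addr - KVA_MIN)));
    }

    /* Origins are 4-byte records; align the offset down accordingly. */
    static inline uint32_t *
    kmsan_orig_addr(uintptr_t addr)
    {
            uintptr_t off = (addr - KVA_MIN) & ~(uintptr_t)3;

            return ((uint32_t *)(KMSAN_ORIG_BASE + off));
    }

Because there is no scaling, each of the two maps must reserve as much KVA
as the kernel map itself, which is where the overhead noted above comes from.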

This change does the following:


- Reserve PML4 slots for the shadow maps. The origin map overlaps with
the KASAN shadow; this is fine, as the two will never be configured
together.
- Create a dummy mapping of the vm_page array, mapping the
corresponding region in both shadows to a single page. This avoids
extra branching in the KMSAN runtime, which would otherwise have to
determine for each memory access whether the address falls within the
kernel map.
- Modify pmap_growkernel() to grow the shadows as well, using a function
in the KMSAN runtime which calls pmap_kmsan_enter() for each 4KB page
in the new region (see the sketch after this list).
- Disable unmapped I/O when KMSAN is configured, as it can lead to false
positives if unmapped buffer pages are initialized.
- Modify some kernel memory size estimates to reflect the presence of
the shadows.
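
As an illustration of the pmap_growkernel() item, here is a sketch of the
growth hook; kmsan_shadow_map() and pmap_kmsan_enter() are the names used
above, but the body is illustrative rather than the committed
implementation:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <vm/vm.h>

    void pmap_kmsan_enter(vm_offset_t);     /* MD hook in pmap.c */

    /*
     * Called from pmap_growkernel() after the kernel map has grown by
     * [addr, addr + size): populate both shadow maps, one 4KB page at
     * a time, for the newly added region.
     */
    void
    kmsan_shadow_map(vm_offset_t addr, size_t size)
    {
            size_t npages, i;

            KASSERT(addr % PAGE_SIZE == 0, ("%s: unaligned addr", __func__));
            npages = atop(round_page(size));
            for (i = 0; i < npages; i++)
                    pmap_kmsan_enter(addr + ptoa(i));
    }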

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

I uploaded the runtime in D31296 in case it's of interest; I haven't yet fleshed out the description, though.

sys/amd64/amd64/pmap.c
2529

Print how large the squeeze is?

2530

Could this underflow?

8047

Can you patch this somewhere in bufinit(), in an MI way?

sys/amd64/include/vmparam.h
175

Are we out of static KVA allocation?

(The 4TB direct map is already somewhat limiting, I believe; it is not impossible to build a machine with >4TB of RAM.)

markj marked 2 inline comments as done.
  • Address feedback.
  • Limit nswbuf when KMSAN is enabled; each pbuf consumes 1MB of KVA and thus requires 2MB in the shadow maps (see the sketch below).
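
To make the arithmetic concrete: 256 pbufs would pin 256MB of KVA plus
another 512MB across the two shadow maps. A hypothetical sketch of the
kind of clamp described; the helper name, cap value, and placement are
assumptions, not the committed change:

    #include <sys/param.h>
    #include <sys/systm.h>
    #include <sys/buf.h>            /* nswbuf */

    /* Hypothetical helper; the real change may clamp nswbuf elsewhere. */
    static void
    kmsan_clamp_nswbuf(void)
    {
    #ifdef KMSAN
            /*
             * Each pbuf maps 1MB of KVA and requires a further 2MB in
             * the shadow maps, so keep the pbuf pool small under KMSAN.
             */
            nswbuf = min(nswbuf, 64);       /* placeholder cap */
    #endif
    }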
This revision is now accepted and ready to land. Jul 25 2021, 10:37 PM

unmapped_buf_allowed needs to be set earlier; otherwise
kern_vfs_bio_buffer_alloc() will needlessly allocate a bunch of KVA for
transient mappings.
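
Something along these lines, presumably from early MD initialization; the
helper name and placement are assumptions:

    #include <sys/param.h>
    #include <sys/buf.h>            /* unmapped_buf_allowed */

    /* Hypothetical early hook; the committed change may live elsewhere. */
    static void
    kmsan_disable_unmapped_io(void)
    {
    #ifdef KMSAN
            /*
             * Clear this before kern_vfs_bio_buffer_alloc() runs so it
             * does not reserve KVA for transient buffer mappings.
             */
            unmapped_buf_allowed = 0;
    #endif
    }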

This revision now requires review to proceed. Jul 26 2021, 2:02 AM
sys/amd64/amd64/pmap.c
439

Currently, wouldn't "is necessary" be more accurate than "can be useful"?

4961

Is there a reason to round "end" both here and inside pmap_kmsan_page_array_startup()?

11504–11505

Is there a reason why you preset PG_M and PG_A for the 4KB mapping below, but not for the 2MB mapping here?

11517–11518

Given the preceding "if" statement, I don't see the reason for this assertion.

sys/amd64/include/vmparam.h
172

Can you double check the end address here? I'm not clear on why it changed.

markj added inline comments.
sys/amd64/amd64/pmap.c
2530

Not unless one changes the values at compile time. Perhaps this should also be asserted?
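
For example, a compile-time guard in the spirit of that suggestion, using
FreeBSD's CTASSERT(); TOTAL and RESERVED are placeholders for the actual
compile-time quantities at this line:

    /*
     * Sketch of the suggested assertion: make underflow of the
     * subtraction a build error rather than a silent wrap.
     */
    CTASSERT(TOTAL >= RESERVED);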

sys/amd64/include/vmparam.h
172

I think it was a typo.

175

It seems so. We could perhaps start growing the direct map up to 6TB automatically. In this case KMSAN would have to be disabled at boot time. I'll add a static assertion to catch the case where someone tries to simultaneously configure KMSAN and increase NDMPML4E at compile time.

In the longer term I think we'd want to swap the direct map and large map in order to support machines with large RAM. Or wait until LA57 is more widely implemented in hw.
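
A sketch of the proposed guard, using the #error idiom; the slot count of
8 assumes the current 4TB direct map (8 PML4 slots of 512GB each), and the
committed assertion may differ:

    /*
     * Refuse to build a kernel that both enables KMSAN and enlarges
     * the direct map, since the KMSAN shadow slots would collide with
     * the extra direct-map PML4 slots.
     */
    #if defined(KMSAN) && NDMPML4E > 8
    #error "KMSAN is incompatible with an enlarged direct map (NDMPML4E > 8)"
    #endif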

markj marked an inline comment as done.

Address feedback.

This revision is now accepted and ready to land. Jul 29 2021, 12:18 AM