
Fix arm64 TLB invalidation with non-4k pages
Closed, Public

Authored by andrew on Mar 10 2022, 3:18 PM.
Details

Summary

When using 16k or 64k pages, atop() will shift the address by more than
the amount needed for a tlbi instruction. Replace it with a new macro
that shifts the address by 12, and use PAGE_SIZE in the for loop so the
code works with any page size.
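
A minimal standalone sketch of the problem and the approach described above (the TLBI_VA* names follow the review discussion below; this is not the committed diff, and the exact instruction form and barriers are illustrative):

```c
#include <sys/param.h>		/* PAGE_SIZE, PAGE_SHIFT */
#include <sys/types.h>		/* vm_offset_t, uint64_t */

#define	TLBI_VA_SHIFT	12	/* the tlbi operand encodes VA[55:12] */
#define	TLBI_VA(va)	((va) >> TLBI_VA_SHIFT)

static __inline void
sketch_invalidate_range(vm_offset_t sva, vm_offset_t eva)
{
	uint64_t end, r;

	/*
	 * The old code used atop(), i.e. a shift by PAGE_SHIFT (14 for
	 * 16k pages, 16 for 64k pages), which drops address bits that
	 * the tlbi operand still needs.  Shifting by 12 and stepping by
	 * one base page per iteration works for any page size.
	 */
	end = TLBI_VA(eva);
	for (r = TLBI_VA(sva); r < end; r += PAGE_SIZE >> TLBI_VA_SHIFT)
		__asm __volatile("tlbi vaae1is, %0" : : "r" (r));
	__asm __volatile("dsb ish");	/* wait for the invalidation */
}
```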

Diff Detail

Repository
rG FreeBSD src repository
Lint: Not Applicable
Unit Tests: Not Applicable

Event Timeline

sys/arm64/arm64/pmap.c
363

Some comment explaining how this is to be used is warranted, IMO.

Should we (eventually) have a PAGE_SHIFT_4K constant?

sys/arm64/arm64/pmap.c
363

Mark, rather than a "generic" PAGE_SHIFT_4K, this might better be described as TLBI_VA_SHIFT, since the 12 here is particular to how the operand to the tlbi instruction is encoded. In other words, even if you have configured the processor to use a different base page size, the shift is still going to be 12.

1283

Rather than moving computation inside the loop, I would suggest adding another #define, perhaps called TLBI_VA_INCR, that would be defined as 1ul << (PAGE_SHIFT - TLBI_VA_SHIFT). Then, in the original version the r++ would simply become r += TLBI_VA_INCR.
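
A concrete reading of this suggestion (a sketch only; the committed change may name things differently, as noted below):

```c
#define	TLBI_VA_INCR	(1ul << (PAGE_SHIFT - TLBI_VA_SHIFT))

	for (r = TLBI_VA(sva); r < end; r += TLBI_VA_INCR)
		__asm __volatile("tlbi vaae1is, %0" : : "r" (r));
```

With 4k pages PAGE_SHIFT equals TLBI_VA_SHIFT, so TLBI_VA_INCR is 1 and the loop degenerates to the original r++.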

sys/arm64/arm64/pmap.c
363

I think a TLBI_VA_SHIFT is indeed better, especially in light of your other comment below.

Add TLBI_VA_SHIFT & TLBI_VA_L3_INCR
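
Based on the update note above, the added definitions presumably look roughly like this (a sketch, not the committed diff; L3_SIZE is the arm64 pmap's level-3, i.e. base-page, mapping size):

```c
#define	TLBI_VA_SHIFT	12				/* tlbi operand encodes VA[55:12] */
#define	TLBI_VA_L3_INCR	(L3_SIZE >> TLBI_VA_SHIFT)	/* step of one L3 (base) page */
```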

This revision is now accepted and ready to land. Mar 15 2022, 3:37 PM
This revision was automatically updated to reflect the committed changes.