
Fix arm64 TLB invalidation with non-4k pages
ClosedPublic

Authored by andrew on Mar 10 2022, 3:18 PM.
Details

Summary

When using 16k or 64k pages, the atop() macro shifts the address by more
than the amount needed for a tlbi instruction. Replace it with a new macro
that shifts the address by 12, and use PAGE_SIZE in the for loop so the
code works with any page size.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 44732
Build 41620: arc lint + arc unit

Event Timeline

sys/arm64/arm64/pmap.c
363

Some comment explaining how this is to be used is warranted, IMO.

Should we (eventually) have a PAGE_SHIFT_4K constant?

sys/arm64/arm64/pmap.c
363

Mark, rather than a "generic" PAGE_SHIFT_4K, this might better be named TLBI_VA_SHIFT, since the 12 here is particular to how the operand of the tlbi instruction is encoded. In other words, even if you have configured the processor to use a different base page size, the shift here is still going to be 12.

1288

Rather than moving the computation inside the loop, I would suggest adding another #define, perhaps called TLBI_VA_INCR, defined as 1ul << (PAGE_SHIFT - TLBI_VA_SHIFT). Then, in the original version, the r++ would simply become r += TLBI_VA_INCR.

sys/arm64/arm64/pmap.c
363

I think a TLBI_VA_SHIFT is indeed better, especially in light of your other comment below.

Add TLBI_VA_SHIFT & TLBI_VA_L3_INCR

This revision is now accepted and ready to land. Mar 15 2022, 3:37 PM
This revision was automatically updated to reflect the committed changes.