Fix arm64 TLB invalidation with non-4k pages
ClosedPublic

Authored by andrew on Mar 10 2022, 3:18 PM.

Details

Summary

When using 16k or 64k pages, atop() will shift the address by more than
is needed for a tlbi instruction. Replace it with a new macro that shifts
the address by 12, and use PAGE_SIZE in the for loop so the code works
with any page size.
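
For context, a minimal sketch of the pre-fix invalidation pattern, reconstructed from the review comments below; the actual code in sys/arm64/arm64/pmap.c is abbreviated here and may differ:

    /*
     * atop() shifts right by PAGE_SHIFT (14 with 16k pages, 16 with
     * 64k pages), but the tlbi operand encodes the virtual address
     * starting at bit 12, so atop() over-shifts on non-4k kernels.
     */
    uint64_t end, r, start;

    start = atop(sva);
    end = atop(eva);
    for (r = start; r < end; r++)
            __asm __volatile("tlbi vaae1is, %0" : : "r" (r));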

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 44777
Build 41665: arc lint + arc unit

Event Timeline

sys/arm64/arm64/pmap.c
363

Some comment explaining how this is to be used is warranted, IMO.

Should we (eventually) have a PAGE_SHIFT_4K constant?

sys/arm64/arm64/pmap.c
363

Mark, rather than a "generic" PAGE_SHIFT_4K, this might better be named TLBI_VA_SHIFT, since the 12 is particular to how the operand to the tlbi instruction is encoded. In other words, even if you have configured the processor to use a different base page size, the shift here is still going to be 12.
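
To make that concrete, a hedged sketch of what the macro could look like; the names follow this discussion, and the mask assumes the tlbi operand's VA field is 44 bits wide (VA[55:12]):

    /*
     * The tlbi operand carries VA[55:12] regardless of the configured
     * translation granule, so the shift is a fixed 12 even though
     * PAGE_SHIFT is 12, 14, or 16 for 4k, 16k, or 64k pages.
     */
    #define TLBI_VA_SHIFT   12
    #define TLBI_VA_MASK    ((1ul << 44) - 1)
    #define TLBI_VA(addr)   (((addr) >> TLBI_VA_SHIFT) & TLBI_VA_MASK)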

1283

Rather than moving the computation inside the loop, I would suggest adding another #define, perhaps called TLBI_VA_INCR, defined as 1ul << (PAGE_SHIFT - TLBI_VA_SHIFT). Then, in the original version, the r++ would simply become r += TLBI_VA_INCR.
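
A sketch of that suggestion, using the reviewer's proposed TLBI_VA_INCR name together with the TLBI_VA() macro sketched above:

    /* One base page advances the tlbi operand by this much. */
    #define TLBI_VA_INCR    (1ul << (PAGE_SHIFT - TLBI_VA_SHIFT))

    start = TLBI_VA(sva);
    end = TLBI_VA(eva);
    for (r = start; r < end; r += TLBI_VA_INCR)
            __asm __volatile("tlbi vaae1is, %0" : : "r" (r));

With 4k pages PAGE_SHIFT == TLBI_VA_SHIFT, so the increment is 1 and the loop behaves exactly like the original r++.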

sys/arm64/arm64/pmap.c
363

I think a TLBI_VA_SHIFT is indeed better, especially in light of your other comment below.

Add TLBI_VA_SHIFT & TLBI_VA_L3_INCR
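
The update names the increment for the level-3 (base) page size rather than the generic TLBI_VA_INCR suggested above; a hedged sketch of the resulting shape, assuming L3_SIZE is the base page size:

    /* tlbi operand step for one L3 (base) page. */
    #define TLBI_VA_L3_INCR (L3_SIZE >> TLBI_VA_SHIFT)

    for (r = start; r < end; r += TLBI_VA_L3_INCR)
            __asm __volatile("tlbi vaae1is, %0" : : "r" (r));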

This revision is now accepted and ready to land. Mar 15 2022, 3:37 PM
This revision was automatically updated to reflect the committed changes.