
Fix arm64 TLB invalidation with non-4k pages
Closed, Public

Authored by andrew on Mar 10 2022, 3:18 PM.

Details

Summary

When using 16k or 64k pages, atop() will shift the address by more than
the amount needed for a tlbi instruction. Replace it with a new macro
that shifts the address by 12, and use PAGE_SIZE in the for loop so the
code works with any page size.
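
A minimal sketch of the macro described here (the mask width is an
assumption based on the tlbi operand encoding, which carries VA[55:12]
in its low bits regardless of the configured base page size):

    /* Sketch only; TLBI_VA_MASK width assumes the operand holds VA[55:12]. */
    #define	TLBI_VA_SHIFT	12			/* fixed by the tlbi encoding */
    #define	TLBI_VA_MASK	((1ul << 44) - 1)
    #define	TLBI_VA(addr)	(((addr) >> TLBI_VA_SHIFT) & TLBI_VA_MASK)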

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

sys/arm64/arm64/pmap.c:362

Some comment explaining how this is to be used is warranted, IMO.

Should we (eventually) have a PAGE_SHIFT_4K constant?

sys/arm64/arm64/pmap.c:362

Mark, rather than a "generic" PAGE_SHIFT_4K, this might better be named TLBI_VA_SHIFT, since the 12 is particular to how the operand of the tlbi instruction is encoded. In other words, even if you have configured the processor to use a different base page size, the shift here is still going to be 12.
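
To make the over-shift concrete (illustrative values, assuming
FreeBSD's atop(va) expands to va >> PAGE_SHIFT):

    /* Illustrative: 16k pages, so PAGE_SHIFT == 14. */
    uint64_t va   = 0x40003000UL;
    uint64_t bad  = va >> 14;	/* atop(): drops VA[13:12] */
    uint64_t good = va >> 12;	/* what the tlbi operand expects */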

sys/arm64/arm64/pmap.c:1283

Rather than moving computation inside the loop, I would suggest adding another #define, perhaps called TLBI_VA_INCR, that would be defined as 1ul << (PAGE_SHIFT - TLBI_VA_SHIFT). Then, in the original version the r++ would simply become r += TLBI_VA_INCR.
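
A hypothetical sketch of that suggestion (the loop body and variable
names here are illustrative, not the actual pmap.c code):

    /* Hypothetical sketch: step the tlbi operand one base page at a time. */
    #define	TLBI_VA_INCR	(1ul << (PAGE_SHIFT - TLBI_VA_SHIFT))

    for (r = TLBI_VA(start); r < TLBI_VA(end); r += TLBI_VA_INCR)
    	__asm __volatile("tlbi vaae1is, %0" : : "r" (r));

Note that for 4k pages PAGE_SHIFT equals TLBI_VA_SHIFT, so TLBI_VA_INCR is 1 and the loop degenerates to the original r++.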

sys/arm64/arm64/pmap.c:362

I think a TLBI_VA_SHIFT is indeed better, especially in light of your other comment below.

Add TLBI_VA_SHIFT & TLBI_VA_L3_INCR
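
Presumably (an assumption based on the names in this update), the L3
increment pairs the fixed tlbi shift with the last-level page size:

    /* Assumed definition, following the naming in this update. */
    #define	TLBI_VA_L3_INCR	(L3_SIZE >> TLBI_VA_SHIFT)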

This revision is now accepted and ready to land. Mar 15 2022, 3:37 PM
This revision was automatically updated to reflect the committed changes.