
arm64: Eliminate unnecessary "page walk cache" invalidations
ClosedPublic

Authored by alc on Dec 31 2021, 12:01 AM.

Details

Summary

A feature of the arm64 TLB invalidation instructions is the ability to specify whether cached intermediate entries, i.e., L{0,1,2}_TABLE entries, are invalidated in addition to the final entry, e.g., an L3_PAGE entry.

Update pmap_invalidate_{page,range}() to support both types of invalidation, allowing the caller to determine which type of invalidation is performed.

Update the callers to request the appropriate type of invalidation.
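
As a rough sketch of what the new interface enables (illustration only, not the committed diff): TLBI VALE1IS invalidates only the last-level entry for a virtual address, preserving cached L{0,1,2}_TABLE entries, while TLBI VAE1IS invalidates entries from any level of the walk. The final_only flag name, the ASID_TO_OPERAND()/COOKIE_TO_ASID() helpers, and the barrier placement are assumed here to follow the existing pmap.c idiom; the kernel-pmap case and the 4 KB granule VA encoding are simplified.

static __inline void
pmap_invalidate_page(pmap_t pmap, vm_offset_t va, bool final_only)
{
	uint64_t r;

	dsb(ishst);
	/* Operand: ASID in the upper bits, VA[55:12] in the lower bits. */
	r = ASID_TO_OPERAND(COOKIE_TO_ASID(pmap->pm_cookie)) |
	    (va >> PAGE_SHIFT);
	if (final_only)
		/* Leaf-only: leaves the page walk cache intact. */
		__asm __volatile("tlbi vale1is, %0" : : "r" (r));
	else
		/* Invalidate cached entries from any level. */
		__asm __volatile("tlbi vae1is, %0" : : "r" (r));
	dsb(ish);
	isb();
}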

Test Plan

Time in seconds for a "make -j16 buildkernel" on an entire 16-core AWS Graviton 1 machine using a GENERIC-NODEBUG kernel:

x before.a1.4xlarge
+ after.a1.4xlarge
+------------------------------------------------------------------------------+
|                      +                                                       |
|                     ++                                                       |
|                     ++  +  xx      x                                         |
|                   + ++  +  xx    x x             x                           |
|     +             * ++  + +x*  *x*x* xxx  xx     x                           |
|+  + +      ++x x **+*+ **x+x* +******x**  xx++ + x      *  + +x xx+x        x|
|              |________|_M__A______M_A_____|_______|                          |
+------------------------------------------------------------------------------+
    N           Min           Max        Median           Avg        Stddev
x  50        323.91        325.08        324.29        324.33    0.25847275
+  50        323.64         324.9       324.105       324.168    0.26796284
Difference at 95.0% confidence
        -0.162 +/- 0.104462
        -0.0499491% +/- 0.0322085%
        (Student's t, pooled s = 0.263261)

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

alc requested review of this revision. Dec 31 2021, 12:01 AM

Does arm64 use recursive mapping?

Did you try to look over the ARM core errata? I would not be surprised if there are bugs in some cores.

In D33705#761849, @kib wrote:

Does arm64 use recursive mapping?

No.

Did you try to look over the ARM core errata? I would not be surprised if there are bugs in some cores.

Yes. In this specific area, i.e., TLB management, there is an ARM erratum, #1286807, affecting Cortex-A76/Neoverse-N1, for which I see Linux and Xen implementing workarounds, but it does not affect the feature used in this proposed change. In fact, Linux has been exploiting the same feature of the arm64 TLB invalidation instructions for years.

P.S. Here is a decent description of ARM erratum #1286807 and its workaround: https://patchew.org/Xen/20201116121140.26763-1-michal.orzel@arm.com/

markj added inline comments.
sys/arm64/arm64/pmap.c
1284

Does Graviton (or any other popular platform) implement FEAT_TLBIRANGE? Looks like that's a fairly new extension.

3063

Do we actually need to adjust sva here? pmap_unwire_l3() will have invalidated that page before returning true.

This revision is now accepted and ready to land. Dec 31 2021, 4:14 PM
sys/arm64/arm64/pmap.c
1253

Would it make sense to call this function, e.g., pmap_invalidate_page_flags(), and then define pmap_invalidate_page() as pmap_invalidate_page_flags(.., true)? The same applies to the other pmap_invalidate_ functions (see the sketch after the list below).

My reasoning is two-fold:

  • this is the signature of the pmap_invalidate_ functions for all other pmaps
  • all calls except two (I might have miscounted) pass true
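
A minimal sketch of the naming being suggested here, purely for illustration (the flag name and its sense, true meaning leaf-only, follow this review's discussion; nothing below is committed code):

static __inline void
pmap_invalidate_page_flags(pmap_t pmap, vm_offset_t va, bool final_only)
{
	/* ...choose between "tlbi vale1is" and "tlbi vae1is"... */
}

static __inline void
pmap_invalidate_page(pmap_t pmap, vm_offset_t va)
{
	/* The common case in this patch: leaf-only invalidation. */
	pmap_invalidate_page_flags(pmap, va, true);
}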
sys/arm64/arm64/pmap.c
1253

The reason that I didn't was out of fear that someone might someday adapt code from, e.g., amd64, and not think about the fact that pmap_invalidate_{page,range}() are semantically different on arm64, because they don't invalidate page walk cache entries.

This is similar to how we have allowed pmap_kenter() to be semantically different across different architectures.

Thoughts?

1284

Graviton 1 (Cortex-A72, just like RPi4) definitely doesn't, and I'm pretty sure that Graviton 2 (Cortex-A76/Neoverse-N1) doesn't either. I don't know about the just announced Graviton 3.

3063

As a matter of correctness, no. As an optimization, yes. Right now, we are redundantly, i.e., twice, invalidating the last valid virtual address when we unhook a page table page. I could see replacing sva += L3_SIZE; with a comment explaining why we don't.

sys/arm64/arm64/pmap.c
1253

IMO a comment in the function header about not invalidating the page mapping structures would be useful. A symmetric comment for amd64, stating that its invalidations do guarantee a flush of the paging-structure caches, might be useful as well.

Other than that, I consider it unlikely that a direct copy of some amd64 code would occur, or that it would be nuanced enough to be affected by such a semantic difference.
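
For illustration, here is the kind of header comment being suggested; the wording is hypothetical, not what was eventually committed:

/*
 * arm64: pmap_invalidate_page() and pmap_invalidate_range() do not
 * necessarily invalidate cached intermediate (L{0,1,2}_TABLE) entries,
 * i.e., the "page walk cache".  Callers that destroy or replace page
 * table pages must request a full invalidation.
 */

/*
 * amd64 (the symmetric comment): pmap_invalidate_page() also flushes the
 * paging-structure caches, so callers need not distinguish leaf-only
 * from full invalidations.
 */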

sys/arm64/arm64/pmap.c
1284

I have a patch to support FEAT_TLBIRANGE; however, I haven't seen any improvement on current hardware that supports it (on an Apple M1 under the hypervisor).

Implement Mark's suggested optimization plus a related optimization to the handling of a singleton valid range as the last mapping in the PTP. Please check the logic carefully.

This revision now requires review to proceed. Dec 31 2021, 7:56 PM
sys/arm64/arm64/pmap.c
1284

I suspect that FEAT_TLBIRANGE matters most on the "big iron" machines where it reduces traffic over the "back office" interconnect used to implement TLB shootdown.

This revision is now accepted and ready to land. Jan 1 2022, 12:00 AM
alc edited the test plan for this revision.

Add comments at the head of pmap_invalidate_{all,page,range}(). In particular, document pmap_invalidate_all() as always invalidating all intermediate-level entries.

This revision now requires review to proceed. Jan 1 2022, 7:16 PM
This revision is now accepted and ready to land. Jan 1 2022, 8:24 PM

@andrew didn't you have a patch for implementing CnP support? If so, could you please post it?