arm64: Reuse the PV entry when updating a mapping in pmap_enter().
Closed, Public

Authored by markj on Jul 8 2018, 5:10 PM.

Details

Summary

Because arm64 updates page table entries using break-before-make, it's
not susceptible to the problem fixed by r329254. However, it's still
susceptible to the problem with PV entry reclamation fixed in r335784
for amd64. Fix that issue in the same way that we did for amd64.
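
A rough sketch of the reuse, assuming the arm64 change mirrors amd64's r335784 (pmap_pvh_remove(), free_pv_entry(), and the pv_list/pv_next fields are the pmap's own; locking, pv_gen updates, and error handling are omitted, and this is not the committed diff): when the VA was previously mapped to a different managed page, detach the old mapping's PV entry and attach it to the new page instead of freeing it and allocating a replacement.

	if (opa != pa && (orig_l3 & ATTR_SW_MANAGED) != 0) {
		om = PHYS_TO_VM_PAGE(opa);
		/* Detach the old mapping's PV entry but keep it. */
		pv = pmap_pvh_remove(&om->md, pmap, va);
		if ((new_l3 & ATTR_SW_MANAGED) != 0)
			/* Reuse it for the new page; no allocation needed. */
			TAILQ_INSERT_TAIL(&m->md.pv_list, pv, pv_next);
		else
			free_pv_entry(pmap, pv);
	}

Since no PV entry allocation is needed on this path, PV chunk reclamation cannot be triggered part-way through the update.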

I note that pmap_update_entry() does not return the old value of the L3
entry after invalidation, so I believe there was a small race where the
accessed or modified bits could have been set by the hardware between
the read of orig_l3 and the invalidation. This change addresses that
too.
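
A minimal illustration of closing that window (the helper name pmap_replace_l3_sketch() is hypothetical; break-before-make sequencing, locking, and the managed-mapping checks are elided, while pmap_load_store(), pmap_invalidate_page(), pmap_page_dirty(), and the vm_page_*() calls are the kernel's own):

	static void
	pmap_replace_l3_sketch(pmap_t pmap, vm_offset_t va, pt_entry_t *l3,
	    pt_entry_t new_l3, vm_page_t m)
	{
		pt_entry_t orig_l3;

		/* The swap returns the entry exactly as it was when replaced. */
		orig_l3 = pmap_load_store(l3, new_l3);
		pmap_invalidate_page(pmap, va);

		/*
		 * Hardware may have set the accessed/dirty bits after any
		 * earlier pmap_load(), so consult only the returned value.
		 */
		if (pmap_page_dirty(orig_l3))
			vm_page_dirty(m);
		if ((orig_l3 & ATTR_AF) != 0)
			vm_page_aflag_set(m, PGA_REFERENCED);
	}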

The original code disables interrupts during the update. I don't see a reason to
do that for userspace mappings, so this change leaves interrupts enabled when
updating them.

Test Plan

I ran some buildworld/buildkernels with this change, and some stress2 tests
(including the one used by Peter to reveal the initial problem). I don't think
Peter has the ability to run stress2 on arm64 at the moment, so this hasn't
gone through the more intensive testing that was performed on amd64.

Event Timeline

markj added reviewers: alc, kib, andrew.
sys/arm64/arm64/pmap.c
2979

I believe that this assignment is unnecessary.

3028
/*
 * The physical page has changed.
 */
3066–3067

I would suggest moving the setting of ATTR_SW_MANAGED outside the critical section like we do on amd64.
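
For illustration only (hypothetical placement, mirroring amd64's PG_MANAGED handling), the bit can be derived from the page before any PV list work starts:

	if ((m->oflags & VPO_UNMANAGED) == 0)
		new_l3 |= ATTR_SW_MANAGED;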

3100

KASSERT(opa == pa, ...), if only as documentation.

I've been reviewing the various pmap implementations, and here are my conclusions thus far.

                Uses            Has PV Alloc    Has COW         Needs r324665
                pv_chunk        Problem         Bug             and r325285
                -------------------------------------------------------------
amd64           Yes             Fixed           Fixed           Fixed
arm/pmap-v4.c   No              N/A             No[1]           N/A
arm/pmap-v6.c   Yes             No              No[2]           No[4]
arm64           Yes             Yes             No[2]           Yes
i386            Yes             No              Fixed           No[4]
mips            Yes             No              Yes             No[4]
powerpc/booke   No              N/A             No[2]           N/A
powerpc/oea     No              No[3]           No[2]           N/A
powerpc/oea64   No                                              N/A
riscv           Yes             Yes                             No[4]
sparc64         No              N/A                             N/A

[1] SMP is not supported.
[2] Performs "break-before-make".
[3] The comments say that the PV entry is reused, but it is not.  That said,
    the old PV entry is freed before the new one is allocated.  I believe
    that reuse could be beneficial because it would eliminate two O(log n)
    Red-Black tree operations.
[4] Still has pvh_global_lock.
markj marked 4 inline comments as done. Jul 8 2018, 6:15 PM
markj added inline comments.
sys/arm64/arm64/pmap.c
3102

I think we should have

orig_l3 = pmap_load_store(l3, new_l3);

here.

markj marked an inline comment as done.

Address review comments.

sys/arm64/arm64/pmap.c
3102

My recollection of older 32-bit arm is that the page table walk on a TLB miss was performed by hardware but the equivalent of setting PG_M required a trap to software. That might require the pmap lock, in which case the PTE's dirty status can't change once pmap_enter() grabs the pmap lock.

You stated that powerpc/oea does not use pv chunks, but note [3] discusses PV entry use.

This revision is now accepted and ready to land. Jul 8 2018, 6:36 PM
sys/arm64/arm64/pmap.c
3048

(new_l3 & ATTR_SW_MANAGED) == 0

3066–3068

(new_l3 & ATTR_SW_MANAGED) != 0

In D16181#343131, @kib wrote:

You stated that powerpc/oea does not use pv chunks, but note [3] discusses PV entry use.

Yes. powerpc/oea doesn't have the allocation problem that markj@ and pho@ discovered, but the code could be optimized. Rather than recycling the soon-to-be-unused PV entry, oea actually removes it from the red-black tree and frees it.

sys/arm64/arm64/pmap.c
3102

armv8 implementations may support hardware updates of the dirty bit. It seems we don't support that at the moment, so I'll remove the assignment I added.
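
For reference, a hedged sketch of how that capability could be probed (the field layout follows ID_AA64MMFR1_EL1.HAFDBS in the Arm ARM, READ_SPECIALREG() is from machine/armreg.h, and the helper name is invented; this is not code in the tree):

	static bool
	hw_dirty_state_supported(void)
	{
		uint64_t mmfr1;

		mmfr1 = READ_SPECIALREG(id_aa64mmfr1_el1);
		/* HAFDBS: 0 = none, 1 = access flag only, 2 = access flag + dirty state. */
		return ((mmfr1 & 0xf) >= 2);
	}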

  • Remove additional assignments to orig_l3.
This revision now requires review to proceed. Jul 8 2018, 7:06 PM
This revision is now accepted and ready to land. Jul 8 2018, 7:18 PM
This revision was automatically updated to reflect the committed changes.