Add largepage support to the arm64 pmap.
ClosedPublic

Authored by markj on Sep 17 2020, 4:35 PM.
Diff Detail

Lint: Passed
Unit: No Test Coverage
Build Status: Buildable 33629 (Build 30872: arc lint + arc unit)

Event Timeline

markj requested review of this revision. Sep 17 2020, 4:35 PM

Harmless typo (L1_BLOCK -> L2_BLOCK in an assertion).

All of my recent amd64 comments apply here as well.

  • Adjust assertion messages.
  • Implement pmap_copy() support.
sys/arm64/arm64/pmap.c
2923

pmap_l0(pmap, sva) -> l0
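
As an illustrative sketch of the suggestion (the surrounding statement and variable names are placeholders, not the actual diff): reuse the already-computed L0 entry pointer rather than recomputing it.

    /* Before: recomputes the L0 entry pointer from scratch. */
    l0e = pmap_load(pmap_l0(pmap, sva));

    /* After: reuse the l0 pointer obtained earlier in the function. */
    l0e = pmap_load(l0);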

3479

I'm confused. l1 can't have ATTR_DESCR_VALID set, but the pmap's resident count isn't being incremented here (like it is below in the else).

3503

Can't the l0 entry be invalid? In which case, l1p will be NULL.

3516

See my earlier comment.

4291

Assert that the l1 entry has the "wired" bit set, like we do below?
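
A hedged sketch of the kind of assertion being requested; the variable name and message text are assumptions:

    KASSERT((pmap_load(l1) & ATTR_SW_WIRED) != 0,
        ("pmap_unwire: l1 entry %#lx is not wired", pmap_load(l1)));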

markj marked 5 inline comments as done.

Address review feedback.

sys/arm64/arm64/pmap.c
3479

You're right; I'm not sure why I didn't just follow what the amd64 implementation does.

sys/arm64/arm64/pmap.c
1267

Hmm. This is wrong for managed mappings. We should be testing whether the page is supposed to be writeable, not whether we've dirtied it.

sys/arm64/arm64/pmap.c
2923–2924

pmap_unuse_pt(pmap, sva, pmap_load(l0), &free);?

3479

Suppose that you are replacing an existing large page mapping. Isn't there still a problem, because you'll increment the resident count when you shouldn't?
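
A rough sketch of the accounting being asked for, assuming origpte holds the previous L1 entry and the helper mirrors the existing resident-count helpers (names are placeholders): only count the pages when the old entry was invalid.

    if ((origpte & ATTR_DESCR_VALID) == 0)
        pmap_resident_count_inc(pmap, L1_SIZE / PAGE_SIZE);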

sys/arm64/arm64/pmap.c
1267

Right. We should be checking (tpte & ATTR_SW_DBM) != 0, I believe.

Does this bug impact correctness? If we fail here because the logically writable mapping is clean, then vm_fault_quick_hold_pages()'s fallback logic will invoke the fault handler and locate the page.

I'm not sure about stage2 translations yet.
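
For the stage-1 case, a hypothetical helper sketching the check discussed above (this function is not part of the patch; it only illustrates the condition):

    /*
     * Does a stage-1 PTE describe a logically writable mapping?
     * Testing ATTR_SW_DBM, rather than the hardware AP bits, gives the
     * right answer for a clean page whose dirty bit is software-managed.
     */
    static bool
    pmap_pte_sw_writable(pt_entry_t tpte)
    {
        return ((tpte & ATTR_SW_DBM) != 0);
    }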

markj marked 2 inline comments as done.

Address feedback:

  • Use pmap_unuse_pt().
  • Fix resident page accounting, second attempt.
sys/arm64/arm64/pmap.c
3499

Don't you need to do some kind of dsb() after the store, at least for the new mapping?

sys/arm64/arm64/pmap.c
3499

Yes.
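
A minimal sketch of the ordering being discussed; the variable names are placeholders:

    /* Publish the new block entry. */
    pmap_store(l1p, newl1);

    /* Ensure the PTE store is visible before anything depends on it. */
    dsb(ishst);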

sys/arm64/arm64/pmap.c
3481

I don't understand why we do this unconditionally. We don't use mp except in the "then" clause of the if statement below.
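
Roughly the restructuring being suggested, with placeholder names and an assumed condition: defer the PHYS_TO_VM_PAGE() lookup to the branch that actually uses mp.

    if ((origpte & ATTR_DESCR_VALID) == 0) {
        /* Only this path needs the page table page. */
        mp = PHYS_TO_VM_PAGE(pmap_load(l0p) & ~ATTR_MASK);
        mp->ref_count++;
    }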

3517

Ditto.

3536

See my amd64 comment.

Address feedback:

  • When mapping a large page, don't look up the PTP unless we need it.
  • Simplify pmap wired count updates.
  • Unconditionally issue dsb after updating a large page PTE.
sys/arm64/arm64/pmap.c
3523

1G -> 2M

3532

L1_SIZE -> pagesizes[psind]

3535

L1_SIZE -> pagesizes[psind]
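
A hedged illustration of the substitution, so the accounting scales with the actual mapping size (2M or 1G); the statement shown is illustrative, not the exact diff:

    /* Account for the wired pages using the superpage size for this psind. */
    pmap->pm_stats.wired_count += pagesizes[psind] / PAGE_SIZE;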

Address feedback:

  • Fix assertion message.
  • Correct wired page accounting.
This revision is now accepted and ready to land. Sep 22 2020, 6:50 PM
This revision was automatically updated to reflect the committed changes.