
Add largepage support to the arm64 pmap.
Closed, Public

Authored by markj on Sep 17 2020, 4:35 PM.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

markj requested review of this revision. Sep 17 2020, 4:35 PM

Harmless typo (L1_BLOCK -> L2_BLOCK in an assertion).

All of my recent amd64 comments apply here as well.

  • Adjust assertion messages.
  • Implement pmap_copy() support.
sys/arm64/arm64/pmap.c
2923 ↗(On Diff #77191)

pmap_l0(pmap, sva) -> l0

3479 ↗(On Diff #77191)

I'm confused. l1 can't have ATTR_DESCR_VALID set, yet the pmap's resident count isn't being incremented here (as it is in the else branch below).

3503 ↗(On Diff #77191)

Can't the l0 entry be invalid, in which case l1p will be NULL?

3516 ↗(On Diff #77191)

See my earlier comment.

4291 ↗(On Diff #77191)

Assert that the l1 entry has the "wired" bit set, like we do below?

markj marked 5 inline comments as done.

Address review feedback.

sys/arm64/arm64/pmap.c
3479 ↗(On Diff #77191)

You're right, I'm not sure why I didn't just follow what the amd64 implementation does.

sys/arm64/arm64/pmap.c
1267 ↗(On Diff #77220)

Hmm. This is wrong for managed mappings. We should be testing whether the page is supposed to be writeable, not whether we've dirtied it.

sys/arm64/arm64/pmap.c
2923–2924 ↗(On Diff #77220)

pmap_unuse_pt(pmap, sva, pmap_load(l0), &free);?

3479 ↗(On Diff #77191)

Suppose you are replacing an existing large page mapping. Isn't there still a problem, since you'll increment the resident count when you shouldn't?

sys/arm64/arm64/pmap.c
1267 ↗(On Diff #77220)

Right. We should be checking (tpte & ATTR_SW_DBM) != 0 I believe.

Does this bug impact correctness? If we fail here because the logically writable mapping is clean, then vm_fault_quick_hold_pages()'s fallback logic will invoke the fault handler and locate the page.

I'm not sure about stage2 translations yet.

markj marked 2 inline comments as done.

Address feedback:

  • Use pmap_unuse_pt().
  • Fix resident page accounting, second attempt.
sys/arm64/arm64/pmap.c
3496 ↗(On Diff #77230)

Don't you need to issue some kind of dsb() after the store, at least for the new mapping?

sys/arm64/arm64/pmap.c
3496 ↗(On Diff #77230)

Yes.

sys/arm64/arm64/pmap.c
3478 ↗(On Diff #77230)

I don't understand why we do this unconditionally. We don't use mp except in the then-clause of the if statement below.

3514 ↗(On Diff #77230)

Ditto.

3533 ↗(On Diff #77230)

See my amd64 comment.

Address feedback:

  • When mapping a large page, don't look up the PTP unless we need it.
  • Simplify pmap wired count updates.
  • Unconditionally issue dsb after updating a large page PTE.
sys/arm64/arm64/pmap.c
3520 ↗(On Diff #77259)

1G -> 2M

3529 ↗(On Diff #77259)

L1_SIZE -> pagesizes[psind]

3532 ↗(On Diff #77259)

L1_SIZE -> pagesizes[psind]

Address feedback:

  • Fix assertion message.
  • Correct wired page accounting.
This revision is now accepted and ready to land. Sep 22 2020, 6:50 PM
This revision was automatically updated to reflect the committed changes.