
Add largepage support to the arm64 pmap.
Closed, Public

Authored by markj on Sep 17 2020, 4:35 PM.

Diff Detail

Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 33672
Build 30911: arc lint + arc unit

Event Timeline

markj requested review of this revision. Sep 17 2020, 4:35 PM

Harmless typo (L1_BLOCK -> L2_BLOCK in an assertion).

All of my recent amd64 comments apply here as well.

  • Adjust assertion messages.
  • Implement pmap_copy() support.
sys/arm64/arm64/pmap.c
2922

pmap_l0(pmap, sva) -> l0

3476

I'm confused. l1 can't have ATTR_DESCR_VALID set, but the pmap's resident count isn't being incremented here (as it is in the else branch below).

3500

Can't the l0 entry be invalid? In which case, l1p will be NULL.
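A minimal sketch of the concern, with variable names (pmap, va, l0p, l1p) assumed rather than taken from the diff: l1p is only meaningful once the L0 entry is known to be a valid table descriptor.

  pd_entry_t *l0p, *l1p;

  l0p = pmap_l0(pmap, va);
  if ((pmap_load(l0p) & ATTR_DESCR_VALID) == 0) {
          /* No L1 table exists yet, so there is nothing to walk. */
          l1p = NULL;
  } else {
          l1p = pmap_l0_to_l1(l0p, va);
  }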

3513

See my earlier comment.

4285

Assert that the l1 entry has the "wired" bit set, like we do below?
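Presumably something along these lines; the function name and message text below are illustrative, not what was committed.

  KASSERT((pmap_load(l1) & ATTR_SW_WIRED) != 0,
      ("pmap_unwire: l1 entry %#lx is not wired", pmap_load(l1)));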

markj marked 5 inline comments as done.

Address review feedback.

sys/arm64/arm64/pmap.c
3476

You're right, I'm not sure why I didn't just follow what the amd64 implementation does.

sys/arm64/arm64/pmap.c
1267

Hmm. This is wrong for managed mappings. We should be testing whether the page is supposed to be writeable, not whether we've dirtied it.

sys/arm64/arm64/pmap.c
2922–2923

pmap_unuse_pt(pmap, sva, pmap_load(l0), &free);?

3476

Suppose that you are replacing an existing large page mapping. Isn't there still a problem? You'll increment the resident count when you shouldn't.
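A sketch of the accounting this implies, with origpte standing in for the previous contents of the L1 entry: the resident count should grow only when the entry was not already a valid mapping.

  if ((origpte & ATTR_DESCR_VALID) == 0)
          pmap_resident_count_inc(pmap, L1_SIZE / PAGE_SIZE);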

sys/arm64/arm64/pmap.c
1267

Right. We should be checking (tpte & ATTR_SW_DBM) != 0 I believe.

Does this bug impact correctness? If we fail here because the logically writable mapping is clean, then vm_fault_quick_hold_pages()'s fallback logic will invoke the fault handler and locate the page.

I'm not sure about stage2 translations yet.
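A sketch of the proposed test, with tpte and prot assumed from the surrounding pmap_extract_and_hold()-style context; a clean but logically writable managed mapping still carries ATTR_SW_DBM.

  bool writable;

  /*
   * Test the software DBM bit rather than the dirty state: the
   * mapping is logically writable even if it has not been dirtied.
   */
  writable = (tpte & ATTR_SW_DBM) != 0;
  if ((prot & VM_PROT_WRITE) == 0 || writable) {
          /* The mapping satisfies the request; hold the page. */
  }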

markj marked 2 inline comments as done.

Address feedback:

  • Use pmap_unuse_pt().
  • Fix resident page accounting, second attempt.
sys/arm64/arm64/pmap.c
3496

Don't you need to issue some kind of dsb() after the store, at least for the new mapping?

sys/arm64/arm64/pmap.c
3496

Yes.
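For reference, the usual arm64 idiom is roughly the following; the exact placement in the committed diff may differ.

  /* Publish the new block entry, then make it visible to the walker. */
  pmap_store(l1p, newpte);
  dsb(ishst);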

sys/arm64/arm64/pmap.c
3478

I don't understand why we do this unconditionally. We don't use mp except in the then branch of the if statement below.
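A sketch of what deferring the lookup might look like; the names (origpte, l0p, mp) are assumed, and the ref_count update is only a placeholder for whatever the branch actually does with mp.

  if ((origpte & ATTR_DESCR_VALID) != 0) {
          /* Look up the PTP only on the path that uses it. */
          mp = PHYS_TO_VM_PAGE(pmap_load(l0p) & ~ATTR_MASK);
          mp->ref_count++;
  }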

3514

Ditto.

3533

See my amd64 comment.

Address feedback:

  • When mapping a large page, don't look up the PTP unless we need it.
  • Simplify pmap wired count updates.
  • Unconditionally issue dsb after updating a large page PTE.
sys/arm64/arm64/pmap.c
3520

1G -> 2M

3529

L1_SIZE -> pagesizes[psind]

3532

L1_SIZE -> pagesizes[psind]
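As a sketch of why the substitution matters, accounting written in terms of pagesizes[psind] stays correct for both psind 1 (presumably 2M, L2) and psind 2 (1G, L1) mappings; the variable names here are illustrative.

  pmap_resident_count_inc(pmap, pagesizes[psind] / PAGE_SIZE);
  if ((newpte & ATTR_SW_WIRED) != 0)
          pmap->pm_stats.wired_count += pagesizes[psind] / PAGE_SIZE;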

Address feedback:

  • Fix assertion message.
  • Correct wired page accounting.
This revision is now accepted and ready to land. Sep 22 2020, 6:50 PM
This revision was automatically updated to reflect the committed changes.