Various arm64 pmap fixes (and one optimization)
Closed, Public

Authored by alc on Jun 10 2019, 5:05 AM.

Details

Summary

Change pmap_demote_l2_locked() so that it removes the superpage mapping on a demotion failure. Otherwise, some callers of pmap_demote_l2_locked(), such as pmap_protect(), may leave an incorrect mapping in place when demotion fails.
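
A minimal sketch of that failure path, using the existing arm64 pmap helpers; the allocation flags and the pmap_remove_l2() argument list are approximations, not the committed diff:

/* Sketch only: the real change is inside pmap_demote_l2_locked(). */
ml3 = vm_page_alloc(NULL, pmap_l2_pindex(va),
    VM_ALLOC_NOOBJ | VM_ALLOC_WIRED);
if (ml3 == NULL) {
	/*
	 * Demotion failed.  Remove the 2MB block mapping so that a
	 * caller such as pmap_protect() is not left with a stale L2
	 * entry carrying the old permissions.
	 */
	SLIST_INIT(&free);
	(void)pmap_remove_l2(pmap, l2, va & ~L2_OFFSET,
	    pmap_load(pmap_l1(pmap, va)), &free, lockp);
	vm_page_free_pages_toq(&free, false);
	return (NULL);
}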

Change pmap_demote_l2_locked() so that it handles addresses that are not superpage aligned. Some callers of pmap_demote_l2_locked(), such as pmap_protect(), may not pass a superpage-aligned address.
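
The alignment fix amounts to rounding the given address down to the 2MB boundary before demoting; a one-line sketch using the existing L2_OFFSET constant (L2_SIZE - 1):

/* Accept any address within the superpage and use its 2MB base. */
va &= ~L2_OFFSET;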

Change pmap_enter_l2() so that it correctly calls vm_page_free_pages_toq(). The arm64 pmap updates the count of wired pages when it frees page table pages, so pmap_enter_l2() should pass false to vm_page_free_pages_toq().
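
A sketch of the corrected call; the surrounding pmap_enter_l2() code is elided and only the boolean argument is the point:

struct spglist free;

SLIST_INIT(&free);
/* ... any freed page table pages are queued on "free" ... */

/*
 * The arm64 pmap already subtracts freed page table pages from the
 * wired-page count, so vm_page_free_pages_toq() must not do it again.
 */
vm_page_free_pages_toq(&free, false);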

Optimize TLB invalidation in pmap_remove_l2().
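
A sketch of the idea only (the diff itself is authoritative): removing a single 2MB block entry needs one TLB invalidation covering that entry, not an invalidation of every 4KB page in its range.

old_l2 = pmap_load_clear(l2);
/* Previously: pmap_invalidate_range(pmap, sva, sva + L2_SIZE); */
pmap_invalidate_page(pmap, sva);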

Test Plan

This patch survived a "make -j4 buildworld" on an Amazon EC2 a1.xlarge (4 cores, 8 GB DRAM) instance.

Event Timeline

This revision is now accepted and ready to land. Jun 10 2019, 12:17 PM
arm64/arm64/pmap.c
3400

This was essentially a cut-and-paste error in porting pmap_enter_pde() from amd64. To avoid another similar error, I think that a followup change should remove the wired page count updates from the page table page deallocation code so that we would correctly pass "true" here. That would also reduce the number of atomic ops performed.

arm64/arm64/pmap.c
3400

Note that v_wire_count is now a per-CPU counter, and not updated with atomics.

There is some overlap with D12725.

There is some overlap with D12725.

I don't think that any of the code overlaps, but the notion of optimizing the TLB invalidation in pmap_remove_l2() was discussed.

arm64/arm64/pmap.c
5056

Is this comment correct? pmap_enter always sets ATTR_AF via ATTR_DEFAULT.

alc marked 2 inline comments as done. Jun 11 2019, 3:13 PM
alc added inline comments.
arm64/arm64/pmap.c
5056

No, the comment is incorrect. Thank you for pointing this out. I will update the patch accordingly.

As an aside, the functions that create "speculative" mappings, that is, mappings not created as the direct result of an access, such as pmap_enter_object() and pmap_enter_quick(), also use ATTR_DEFAULT and thereby incorrectly mark the pages as referenced.
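
Illustrative only, and (as the exchange below notes) not yet possible because pmap_fault() cannot set ATTR_AF: a speculative mapping would be entered without the Access flag so the page is only marked referenced once it is actually touched. The expression is approximate; ATTR_DEFAULT, ATTR_AF, ATTR_IDX, ATTR_AP, and L3_PAGE are the existing arm64 constants.

pa = VM_PAGE_TO_PHYS(m);
l3_val = pa | (ATTR_DEFAULT & ~ATTR_AF) | ATTR_IDX(m->md.pv_memattr) |
    ATTR_AP(ATTR_AP_RO) | L3_PAGE;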

alc marked an inline comment as done.
This revision now requires review to proceed. Jun 11 2019, 3:46 PM

Andrew, as I understand the current state of the pmap code, ATTR_AF is always preset (and never actually cleared) because pmap_fault() does not yet have the ability to set ATTR_AF. Is that correct?

In D20585#445331, @alc wrote:

Andrew, as I understand the current state of the pmap code, ATTR_AF is always preset (and never actually cleared) because pmap_fault() does not yet have the ability to set ATTR_AF. Is that correct?

Correct. It's never made its way up to the top of things I need to look at.
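
For reference, a hedged sketch of what such a pmap_fault() change might look like (hypothetical at the time of this review; the DFSC constants and pmap accessors are named from memory and may differ): on an Access flag fault, set ATTR_AF in the faulting PTE and let the instruction retry.

/* Hypothetical addition to pmap_fault(); not part of this revision. */
switch (esr & ISS_DATA_DFSC_MASK) {
case ISS_DATA_DFSC_AFF_L1:
case ISS_DATA_DFSC_AFF_L2:
case ISS_DATA_DFSC_AFF_L3:
	PMAP_LOCK(pmap);
	pte = pmap_pte(pmap, far, &lvl);
	if (pte != NULL && (pmap_load(pte) & ATTR_AF) == 0) {
		pmap_set_bits(pte, ATTR_AF);
		rv = KERN_SUCCESS;
	}
	PMAP_UNLOCK(pmap);
	break;
}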

This revision was not accepted when it landed; it landed in state Needs Review. Jun 12 2019, 8:39 PM
This revision was automatically updated to reflect the committed changes.