
Have pmap_enter() reuse the PV entry when updating a mapping's PA.
ClosedPublic

Authored by markj on Jun 25 2018, 10:12 AM.
Details

Summary

As discussed in D15910, with this change pmap_enter() temporarily
invalidates the old mapping so that its PV entry can be reused. This
sidesteps the bug reported there, where the PV entry allocation causes
the old PV entry to be reclaimed, leading to a use-after-free of the
page table page.

This addresses the same bug as r329254.

Test Plan

Just verified that we boot with this change. Peter is working on validating it.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

markj added reviewers: alc, kib.

I wonder if it would be worth having a #define which allows platforms to opt out of the fix in r329254.

kib added inline comments.
sys/amd64/amd64/pmap.c
4880 ↗(On Diff #44412)

You probably should assert that opa == (origpte & PG_FRAME), same as in the validate: code below.

This revision is now accepted and ready to land. Jun 25 2018, 11:40 AM

At first glance, this looks good. I'll give it a careful reading in the next 24 hours.

markj marked an inline comment as done.
  • Add assertion suggested by kib.
This revision now requires review to proceed. Jun 26 2018, 9:29 AM
sys/amd64/amd64/pmap.c
4876–4877 ↗(On Diff #44455)

I would suggest ending the sentence at the word "mapping" and give two reasons for the temporary invalidation as subsequent sentences. In particular, I think that we want a sentence (or two) here that describes the COW bug. Otherwise, someone might later decide that this code can be "improved" by avoiding the temporarily invalid mapping.

4880 ↗(On Diff #44455)

You can use pte_load_clear(pte) here.

markj marked 2 inline comments as done.
  • Address alc's feedback: use pte_load_clear(), and explain both reasons for the invalidation.
sys/amd64/amd64/pmap.c
4888 ↗(On Diff #44455)

Is there any reason to perform this check or the next one under the PV list lock?

sys/amd64/amd64/pmap.c
4888 ↗(On Diff #44455)

Not that I recall.

sys/amd64/amd64/pmap.c
4888 ↗(On Diff #44455)

And to be clear, we routinely perform these operations before the PV list lock is acquired, and that was intentional, not accidental. Recall that the MI layer tests for dirtiness by checking m->dirty, calling pmap_is_modified() (or similar), and rechecking m->dirty. I assert that the lock on the current pmap is going to ensure that we do not miss the dirtiness of a page as that dirtiness transitions from the PTE to the vm_page.

Maybe this should be written in a comment somewhere.

  • Lift vm_page operations out of the PV list lock scope.
  • Note that the pmap lock is sufficient for update dirty and reference bits.
sys/amd64/amd64/pmap.c
4888 ↗(On Diff #44455)

Either way, I think the handling of PGA_REFERENCED is a bit racy. We don't hold the page lock here, and the page daemon tests PGA_REFERENCED before calling pmap_ts_referenced(), so if we clear PG_A and set PGA_REFERENCED in between, the page daemon won't see the reference and may free the page prematurely.

sys/amd64/amd64/pmap.c
4888 ↗(On Diff #44455)

Never mind, of course there's nothing preventing a premature free if a reference occurs after the pmap_ts_referenced() call.

This revision is now accepted and ready to land. Jun 28 2018, 6:13 AM
This revision was automatically updated to reflect the committed changes.

Has anyone begun adapting this change to the i386 pmap?

In D16005#340128, @alc wrote:

Has anyone begun adapting this change to the i386 pmap?

I haven't but was planning to do so once I'm back from some traveling in about 2 weeks, if no one else has.

FWIW, though armv8's pmap_enter() does indeed invalidate the mapping during an update, it does not reuse the PV entry and thus has the same bug with reclamation as amd64 did.