amd64 pmap: reduce chances that chunk mutex will be taken while pv list lock is held
Abandoned (Public)

Authored by mjg on Oct 17 2019, 9:45 PM.
Details

Reviewers
alc
kib
jeff
markj
Summary

Most notably, callers of get_pv_entry() relock immediately after it returns if necessary. We can instead unlock upfront whenever we would relock later anyway. While here, change the semantics of this function to allow reclamation without the lock being passed in.

I did not benchmark this change, but I did verify that the lock does in fact get dropped many times in practice.
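To make the pattern concrete, here is a hedged before/after sketch of a typical call site. The get_pv_entry() signature and the PV list lock macros mirror sys/amd64/amd64/pmap.c, but the surrounding code is invented and declarations are elided; this is not the diff itself:

	/*
	 * Before: get_pv_entry() may take the pv chunk mutex while the
	 * caller still holds a PV list lock, only for the caller to
	 * switch to a different lock right after it returns.
	 */
	pv = get_pv_entry(pmap, &lock);
	CHANGE_PV_LIST_LOCK_TO_VM_PAGE(&lock, m);

	/*
	 * After: since we would relock anyway, drop the PV list lock up
	 * front so the chunk mutex (and any reclamation) is taken with
	 * the PV list lock released.
	 */
	if (lock != NULL && lock != VM_PAGE_TO_PV_LIST_LOCK(m)) {
		rw_wunlock(lock);
		lock = NULL;
	}
	pv = get_pv_entry(pmap, &lock);
	CHANGE_PV_LIST_LOCK_TO_VM_PAGE(&lock, m);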

Diff Detail

Lint: Skipped
Unit Tests: Skipped
Build Status: Buildable 27092

Event Timeline

mjg retitled this revision from amd64 pmap: reduce chances that chunk mutex will be taken while pv list is held to amd64 pmap: reduce chances that chunk mutex will be taken while pv list lock is held. Oct 17 2019, 9:46 PM

I find it somewhat unnatural to do this unlock optimization in pmap_try_insert_pv_entry(), at least for the case of the call from pmap_copy(). There you can release the lock much earlier while iterating over the src page table, if it does not match the needed lock for the next pte.
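A hedged sketch of that alternative, with names borrowed from pmap.c (pmap_try_insert_pv_entry(), VM_PAGE_TO_PV_LIST_LOCK()) but the loop context elided and invented:

	/*
	 * Inside pmap_copy()'s walk of the source page table: drop the
	 * PV list lock as soon as the next pte maps a page hashing to a
	 * different lock, instead of deferring the unlock to
	 * pmap_try_insert_pv_entry().
	 */
	if (lock != NULL && lock != VM_PAGE_TO_PV_LIST_LOCK(m)) {
		rw_wunlock(lock);
		lock = NULL;
	}
	if (!pmap_try_insert_pv_entry(dst_pmap, addr, m, &lock))
		break;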

sys/amd64/amd64/pmap.c:361

Why do you need the ({}) extension there?
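For context, ({}) refers to the GCC/Clang statement-expression extension, where a braced block inside parentheses yields the value of its last expression. A minimal standalone illustration (not code from the diff):

	#include <stdio.h>

	/*
	 * Statement expression: the ({ ... }) block evaluates to its
	 * last expression, letting the macro evaluate x only once.
	 */
	#define	SQUARE(x)	({ int _v = (x); _v * _v; })

	int
	main(void)
	{
		printf("%d\n", SQUARE(3 + 1));	/* prints 16 */
		return (0);
	}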

sys/amd64/amd64/pmap.c:4785

_NORECLAIM) != 0
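The fragment appears to suggest writing the flag test out explicitly, as FreeBSD's style(9) prefers for non-boolean operands. A hypothetical illustration; the flag and variable names here are invented, not taken from the diff:

	/* style(9): test flags by comparing against 0 explicitly. */
	if ((pv_flags & PC_NORECLAIM) != 0)
		return (NULL);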