
amd64 pmap: avoid an unnecessary demotion
ClosedPublic

Authored by alc on Thu, Jul 10, 7:45 PM.
Details

Summary

Sync the amd64 pmap with D51220. Primarily, avoid an unnecessary demotion.

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

alc requested review of this revision. Thu, Jul 10, 7:45 PM
alc created this revision.
markj added inline comments.
sys/amd64/amd64/pmap.c
7552

Should we make this conditional on va < VM_MAXUSER_ADDRESS?

7560

Same here.

This revision is now accepted and ready to land. Fri, Jul 11, 1:23 PM
kib added inline comments.
sys/amd64/amd64/pmap.c
7552

I do not think so: pmap_alloc_pde() only returns a non-NULL page table page for user mappings.

alc marked 3 inline comments as done. Sun, Jul 13, 6:34 PM
alc added inline comments.
sys/amd64/amd64/pmap.c
7552

Yes, pgpg is NULL for the kernel address space.
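The invariant being discussed can be sketched as follows. This is an illustrative sketch only, not the actual sys/amd64/amd64/pmap.c code: the helper name is invented, "pgpg" stands in for the page table page pointer returned by pmap_alloc_pde(), and the VM_MAXUSER_ADDRESS value shown assumes 4-level paging.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Assumed value for amd64 with 4-level paging. */
#define VM_MAXUSER_ADDRESS	0x0000800000000000UL

struct vm_page;				/* opaque stand-in for vm_page_t */

/*
 * Because pmap_alloc_pde() returns a non-NULL page table page only for
 * user mappings, testing pgpg against NULL subsumes an explicit
 * va < VM_MAXUSER_ADDRESS check.
 */
static bool
is_user_mapping(uint64_t va, struct vm_page *pgpg)
{
	if (pgpg != NULL) {
		assert(va < VM_MAXUSER_ADDRESS); /* user VA invariant */
		return (true);
	}
	return (false);			/* kernel address space */
}
```

This is why the explicit address comparison markj suggested would be redundant: the NULL test already encodes the user/kernel distinction.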

alc marked an inline comment as done. Sun, Jul 13, 8:07 PM
alc added inline comments.
sys/amd64/amd64/pmap.c
7606

In contrast to arm64, I am not clearing the PDE and issuing a TLB invalidation here for the kernel address space. The reason for that is derived from the following text from the AMD manual:

Use of Cached Entries When Reporting a Page Fault Exception. On current AMD64 processors, when any type of page fault exception is encountered by the MMU, any cached upper-level entries that lead to the faulting entry are flushed (along with the TLB entry, if already cached) and the table walk is repeated to confirm the page fault using the table entries in memory. This is done because a table entry is allowed to be upgraded (by marking it as present, or by removing its write, execute or supervisor restrictions) without explicitly maintaining TLB coherency. Such an upgrade will be found when the table is re-walked, which resolves the fault. If the fault is confirmed on the re-walk however, a page fault exception is reported, and upper level entries that may have been cached on the re-walk are flushed.

At this point either the invlpgs within pmap_remove_ptes() or the above pmap_invalidate_all() should have invalidated any page walk cache entry holding the soon-to-be overwritten PDE. However, if that PDE somehow gets cached again before it is rewritten below, and used to translate a virtual address, it will lead to an invalid PTE, and so the MMU will flush all of the cached upper-level entries and rewalk the page table.

sys/amd64/amd64/pmap.c
7606

I believe this is the architectural behavior, guaranteed on both Intel and AMD: invalid PTEs are never cached, and they cause a TLB flush for the address.