
Implement per-CPU pmap activation tracking.
Closed, Public

Authored by markj on Jan 17 2019, 5:00 PM.
Details

Summary

The intent is to reduce the overhead of TLB shootdowns by ensuring that
we don't interrupt CPUs that are not using the given pmap. Tracking is
performed in pmap_activate(), which is called during context switches:
from cpu_throw() (when a thread is exiting or an AP is starting) or
from cpu_switch().

Note that there are two pmaps that are special in this context:
kernel_pmap and process 0's pmap. The former should never be activated,
and the latter is used for kernel threads.

For now, pmap_sync_icache() must still interrupt all CPUs rather than
only the CPUs on which the given pmap is active.

Diff Detail

Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 22444
Build 21599: arc lint + arc unit

Event Timeline

sys/riscv/riscv/pmap.c
740

Suppose that CPU n is not recorded in pm_active when the mask is assigned, but later it activates the pmap. What would ensure that CPU n observes page table updates made before the pmap_invalidate_page() call?

Before the patch, when fence() was performed unconditionally, we could state that this fence would synchronize with a fence necessarily performed in the course of the context switch.

4277

When pmap_activate() is called from the context-switch code, critical_enter()/critical_exit() only waste CPU cycles.

4293

Why do you need the L1 physical address in the PCB?

markj marked 3 inline comments as done.
  • Address kib's comments.
This revision is now accepted and ready to land. Jan 18 2019, 7:27 PM
sys/riscv/riscv/pmap.c
4293–4294

I know @kib suggested removing the L1 PA from the PCB, but what about storing it in the pmap to avoid calls to vtophys() during cpu_switch?

For that matter, you could just store the full %satp value to save a few more instructions so you would end up just doing 'load_satp(pmap->pm_satp)'.

4296–4301

Technically, the privileged spec (1.10) says to execute sfence.vma prior to writing a new satp value:

Note that writing satp does not imply any ordering constraints between page-table updates and subsequent address translations. If the new address space’s page tables have been modified, it may be necessary to execute an SFENCE.VMA instruction (see Section 4.2.1) prior to writing satp.

Though I think we are probably fine to invalidate afterward?

Also, I don't think we want the full pmap_invalidate_all(), as that may issue an IPI. I think we just want a local invalidation solely on the current hart via sfence_vma(), which would match what the old assembly code was doing. I think that is always true for all calls to this function, as it isn't modifying the page tables, just changing which page tables the current CPU is using.

Fix critical section handling.

This revision now requires review to proceed. Jan 28 2019, 5:12 PM
markj marked 2 inline comments as done.

Address jhb's comments.

This revision is now accepted and ready to land. Feb 11 2019, 10:29 PM
This revision was automatically updated to reflect the committed changes.