sys/arm64/include/pmap.h
#define	ASID_RESERVED_FOR_PID_0	0
#define	ASID_RESERVED_FOR_EFI	1
#define	ASID_FIRST_AVAILABLE	(ASID_RESERVED_FOR_EFI + 1)
#define	ASID_TO_OPERAND(asid) ({					\
	KASSERT((asid) != -1, ("invalid ASID"));			\
	(uint64_t)(asid) << TTBR_ASID_SHIFT;				\
})
#define	PMAP_WANT_ACTIVE_CPUS_NAIVE
andrew: We could add pm_active to the pmap struct on arm64 at the cost of 2 atomic operations per process switch, plus the extra space used in the struct. We may not want to MFC it if we consider pmap_t to have a stable size outside of pmap.c, so it would be better added later if so.
kib: I do not see the point. This would make each context switch pay two atomics for the benefit of a rarely used API. It would be especially sad because armv8 does not need to maintain pm_active for TLB invalidations at all, due to the availability of broadcast invalidation instructions.
andrew: Having looked at the other pmap implementations, I noticed that on powerpc pm_active is managed, but it doesn't seem to be currently used (other than by this patch).
kib: Which makes powerpc another candidate for the common (naive) implementation of pmap_active_cpus().
extern vm_offset_t virtual_avail;
extern vm_offset_t virtual_end;
/*
 * Macros to test if a mapping is mappable with an L1 Section mapping
 * or an L2 Large Page mapping.
 */
#define	L1_MAPPABLE_P(va, pa, size)					\