
Don't fail changing props for unmapped DMAP memory

Authored by andrew on Dec 15 2021, 11:04 AM.



When recursing in pmap_change_props_locked we may fail because there is
no pte. This shouldn't be considered a failure, as it can legitimately
happen in a few cases, e.g. when there are multiple normal memory ranges
with device memory between them.
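The behaviour described above can be modelled in a small user-space sketch. This is not the actual pmap code: `pte_t`, `lookup_pte`, and `change_props_range` are hypothetical stand-ins that only illustrate the idea of skipping unmapped portions of a range instead of returning an error.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical page-table model: an entry with mapped == false stands
 * for a hole (e.g. device memory between normal-memory DMAP ranges). */
typedef struct { bool mapped; } pte_t;

static pte_t *
lookup_pte(pte_t *table, size_t nentries, size_t idx)
{
	if (idx >= nentries || !table[idx].mapped)
		return (NULL);	/* no pte for this address */
	return (&table[idx]);
}

/*
 * Sketch of the fixed behaviour: when no pte exists for part of the
 * range, skip over it and keep going rather than failing the whole
 * operation.  Returns the number of entries whose props were changed.
 */
static int
change_props_range(pte_t *table, size_t nentries)
{
	int changed = 0;

	for (size_t i = 0; i < nentries; i++) {
		pte_t *pte = lookup_pte(table, nentries, i);

		if (pte == NULL)
			continue;	/* unmapped: not a failure */
		changed++;		/* would update attributes here */
	}
	return (changed);
}
```

With the old behaviour the first unmapped entry would have aborted the walk; here the hole is simply stepped over.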

Diff Detail

rG FreeBSD src repository

Event Timeline

Assuming this is the patch I saw yesterday, it unbreaks the ENA driver.

It's an updated version, so it might break it again (but hopefully not). It now skips over unmapped memory rather than returning early.

Just tested the latest version; works fine.

This revision is now accepted and ready to land. Dec 15 2021, 3:45 PM

The last sentence of this comment isn't accurate now.


There is a very similar function in arm64/iommu/iommu_pmap.c that still has the old behaviour w.r.t. setting *level when a PTE is missing. I think it would be better to keep them consistent.
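The consistency concern above is about a lookup that reports, via `*level`, the level at which the table walk stopped even when no PTE exists. A simplified model of that contract (illustrative only; `walk_pte_model` and its four-entry validity array are hypothetical, not the real `pmap_pte()` or iommu walk code):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct { bool valid; } pte_t;

/*
 * Model of a four-level walk (L0..L3) that always writes back the
 * level at which it stopped, even when it returns NULL because no
 * pte exists.  Callers can then tell how large the unmapped hole is
 * and skip it, which is why both copies of the function should agree.
 */
static pte_t *
walk_pte_model(const bool valid[4], int *level)
{
	static pte_t leaf = { true };

	for (int lvl = 0; lvl < 3; lvl++) {
		if (!valid[lvl]) {
			*level = lvl;	/* walk stopped here: report it */
			return (NULL);
		}
	}
	if (!valid[3]) {
		*level = 3;
		return (NULL);
	}
	*level = 3;
	return (&leaf);
}
```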


The assertion lvl == 3 in pmap_qremove() doesn't catch some erroneous cases now.


I think I was confused and the code was correct; however, it should start with pmap_l0 so we can skip over the l0 entry if it's unmapped.


Should be fixed in D33509


I also think that the code was correct, and that the proposed changes to pmap_pte() should be undone.

This revision now requires review to proceed. Dec 20 2021, 10:12 AM
This revision is now accepted and ready to land. Dec 21 2021, 4:58 AM

So for the case of setting memory as uncacheable (when wbinv_range() is called), do you end up calling this function on an unmapped range? Does it work on arm64?

We only ever call wbinv_range when ptep != NULL, so a mapping will exist for the current virtual address. If the DMAP is unmapped, the caller will perform the cache management. If the DMAP is mapped, cpu_dcache_wbinv_range will be called twice: once for the non-DMAP memory and once for the DMAP memory.
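That call pattern can be sketched as a tiny model. This is purely illustrative: `wbinv_calls`, `cpu_dcache_wbinv_range_model`, and `change_attr_model` are hypothetical names standing in for the kernel logic described above.

```c
#include <assert.h>
#include <stdbool.h>

static int wbinv_calls;		/* counts writeback/invalidate operations */

static void
cpu_dcache_wbinv_range_model(void)
{
	wbinv_calls++;
}

/*
 * Model of the flush behaviour described above: the non-DMAP alias
 * always has a pte, so it is always flushed; the DMAP alias is flushed
 * only when the DMAP mapping exists, otherwise the caller is expected
 * to perform the cache management itself.
 */
static void
change_attr_model(bool dmap_mapped)
{
	cpu_dcache_wbinv_range_model();		/* non-DMAP alias */
	if (dmap_mapped)
		cpu_dcache_wbinv_range_model();	/* DMAP alias */
}
```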