
iommu_gas: Avoid double unmapping on error
ClosedPublic

Authored by alc on Jul 20 2022, 4:54 PM.
Details

Summary

In the extremely unlikely case that the iommu_gas_map_region() call in bus_dma_iommu_load_ident() failed, we would attempt to unmap the failed entry twice: first in iommu_gas_map_region(), and a second time in the caller. Once is enough, and twice is problematic because it leads to a second RB_REMOVE call on the same tree node. Like it or not, RB_TREE does not handle that possibility.

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

alc requested review of this revision.Jul 20 2022, 4:54 PM
alc created this revision.
This revision is now accepted and ready to land.Jul 20 2022, 4:57 PM

Kostik, I have another related concern. What guarantee do we have that dmar_qi_task() is actually finished with the entry when it is freed here or in domain_init_rmrr()?

This revision was automatically updated to reflect the committed changes.
In D35869#814447, @alc wrote:

Kostik, I have another related concern. What guarantee do we have that dmar_qi_task() is actually finished with the entry when it is freed here or in domain_init_rmrr()?

You mean that the entry's range must still be flushed from the TLB, and the entry must be waited upon by the QI mechanism? That probably should be done for correctness.

But at least for RMRR entries, I believe that all of them are instantiated before DMAR translation is enabled, or at least they should be. This is probably more important than flushing.

In D35869#814571, @kib wrote:
In D35869#814447, @alc wrote:

Kostik, I have another related concern. What guarantee do we have that dmar_qi_task() is actually finished with the entry when it is freed here or in domain_init_rmrr()?

You mean that the entry's range must still be flushed from the TLB, and the entry must be waited upon by the QI mechanism? That probably should be done for correctness.

The iommu_gas_free_entry() call here could be performed while the entry is still in the tlb_flush_entries TAILQ. To prevent this from happening, when false is passed as the free argument to iommu_domain_unload_entry(), I think that the IOTLB invalidation should be performed synchronously. Then, we would also no longer need the IOMMU_MAP_ENTRY_QI_NF flag.

But at least for RMRR entries, I believe that all of them are instantiated before DMAR translation is enabled, or at least they should be. This is probably more important than flushing.

Concretely, here is what I'm suggesting:

diff --git a/sys/dev/iommu/iommu_gas.h b/sys/dev/iommu/iommu_gas.h
index c32a098538b0..a9d0df5f272f 100644
--- a/sys/dev/iommu/iommu_gas.h
+++ b/sys/dev/iommu/iommu_gas.h
@@ -43,17 +43,16 @@
 #define        IOMMU_PGF_ALLOC         0x0004
 #define        IOMMU_PGF_NOALLOC       0x0008
 #define        IOMMU_PGF_OBJL          0x0010
 
 #define        IOMMU_MAP_ENTRY_PLACE   0x0001  /* Fake entry */
 #define        IOMMU_MAP_ENTRY_RMRR    0x0002  /* Permanent, not linked by
                                           dmamap_link */
 #define        IOMMU_MAP_ENTRY_MAP     0x0004  /* Busdma created, linked by
                                           dmamap_link */
 #define        IOMMU_MAP_ENTRY_UNMAPPED        0x0010  /* No backing pages */
-#define        IOMMU_MAP_ENTRY_QI_NF   0x0020  /* qi task, do not free entry */
 #define        IOMMU_MAP_ENTRY_READ    0x1000  /* Read permitted */
 #define        IOMMU_MAP_ENTRY_WRITE   0x2000  /* Write permitted */
 #define        IOMMU_MAP_ENTRY_SNOOP   0x4000  /* Snoop */
 #define        IOMMU_MAP_ENTRY_TM      0x8000  /* Transient */
 
 #endif /* !_DEV_IOMMU_IOMMU_GAS_H_ */
diff --git a/sys/x86/iommu/intel_ctx.c b/sys/x86/iommu/intel_ctx.c
index bfc607674b57..c6f1effe7b6b 100644
--- a/sys/x86/iommu/intel_ctx.c
+++ b/sys/x86/iommu/intel_ctx.c
@@ -868,27 +868,25 @@ dmar_domain_free_entry(struct iommu_map_entry *entry, bool free)
 }
 
 void
 iommu_domain_unload_entry(struct iommu_map_entry *entry, bool free)
 {
        struct dmar_domain *domain;
        struct dmar_unit *unit;
 
        domain = IODOM2DOM(entry->domain);
        unit = DOM2DMAR(domain);
-       if (unit->qi_enabled) {
+       if (unit->qi_enabled && free) {
                DMAR_LOCK(unit);
                dmar_qi_invalidate_locked(IODOM2DOM(entry->domain),
                    entry->start, entry->end - entry->start, &entry->gseq,
                    true);
-               if (!free)
-                       entry->flags |= IOMMU_MAP_ENTRY_QI_NF;
                TAILQ_INSERT_TAIL(&unit->tlb_flush_entries, entry, dmamap_link);
                DMAR_UNLOCK(unit);
        } else {
                domain_flush_iotlb_sync(IODOM2DOM(entry->domain),
                    entry->start, entry->end - entry->start);
                dmar_domain_free_entry(entry, free);
        }
 }
 
 static bool
diff --git a/sys/x86/iommu/intel_qi.c b/sys/x86/iommu/intel_qi.c
index ca58715a227c..57ad59eaa91a 100644
--- a/sys/x86/iommu/intel_qi.c
+++ b/sys/x86/iommu/intel_qi.c
@@ -355,22 +355,21 @@ dmar_qi_task(void *arg, int pending __unused)
 
        DMAR_LOCK(unit);
        for (;;) {
                entry = TAILQ_FIRST(&unit->tlb_flush_entries);
                if (entry == NULL)
                        break;
                if (!dmar_qi_seq_processed(unit, &entry->gseq))
                        break;
                TAILQ_REMOVE(&unit->tlb_flush_entries, entry, dmamap_link);
                DMAR_UNLOCK(unit);
-               dmar_domain_free_entry(entry, (entry->flags &
-                   IOMMU_MAP_ENTRY_QI_NF) == 0);
+               dmar_domain_free_entry(entry, true);
                DMAR_LOCK(unit);
        }
        if (unit->inv_seq_waiters > 0)
                wakeup(&unit->inv_seq_waiters);
        DMAR_UNLOCK(unit);
 }
 
 int
 dmar_init_qi(struct dmar_unit *unit)
 {