Map arm64 pci config memory as non-posted
ClosedPublic

Authored by andrew on May 2 2021, 12:34 PM.
Details

Summary

On arm64 PCI config memory is expected to be mapped with a non-posted
device type. To handle this use the new bus_map_resource support in
arm64 to map memory with the new VM_MEMATTR_DEVICE_NP attribute. This
memory has already been allocated and activated, it just needs to be
mapped.
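The approach the summary describes can be sketched roughly as follows. This is an illustrative fragment, not the committed code: the function name is hypothetical, and it assumes the driver holds an already-activated SYS_RES_MEMORY resource for config space. It uses the `resource_map_request`/`bus_map_resource` interface, setting `memattr` before mapping.

```c
/*
 * Sketch only: map an already-allocated and activated PCI config
 * resource with the non-posted device attribute on arm64.  The
 * function name and calling context are illustrative.
 */
static int
pcie_map_cfg_nonposted(device_t dev, struct resource *res,
    struct resource_map *map)
{
	struct resource_map_request req;

	resource_init_map_request(&req);
	/* Request a non-posted (Device-nGnRnE) mapping. */
	req.memattr = VM_MEMATTR_DEVICE_NP;

	return (bus_map_resource(dev, SYS_RES_MEMORY, res, &req, map));
}
```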

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

Maybe it would be simpler and clearer to have a new memory type in struct resource for the PCI config region that all the bus_* routines would know about?

This seemed to be the consensus when I asked how best to do it on IRC.

Are there SYS_RES_MEMORY regions for which you don't want VM_MEMATTR_DEVICE_NP?

In D30079#678549, @jhb wrote:

Are there SYS_RES_MEMORY regions for which you don't want VM_MEMATTR_DEVICE_NP?

All non-PCI memory should use the faster nGnRE memory; however, the ordering isn't strong enough for PCI config space. I would like to change VM_MEMATTR_DEVICE to use nGnRE mappings, however I will need to ensure PCIe still works.

Also, on the M1 the memory types are backwards, so PCI memory will need to be mapped with nGnRE while other MMIO memory will be nGnRnE.

There is also a Tegra PCIe driver that is common to arm and arm64, so we should define VM_MEMATTR_DEVICE_NP for arm as well. But I'm not sure if we have the rest of the necessary infrastructure implemented on arm.

Map the Tegra PCIe config memory as non-posted

In D30079#777484, @mmel wrote:

There is also a Tegra PCIe driver that is common to arm and arm64, so we should define VM_MEMATTR_DEVICE_NP for arm as well. But I'm not sure if we have the rest of the necessary infrastructure implemented on arm.

We use bus_space_map directly in the Tegra driver, so we can pass the appropriate flag to it.
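That direct use of bus_space_map could look roughly like the sketch below. This is an assumption-laden illustration, not the Tegra driver's actual code: the wrapper name is hypothetical, and the flag name BUS_SPACE_MAP_NONPOSTED is assumed here rather than taken from the tree.

```c
/*
 * Sketch, not verbatim driver code: since the Tegra PCIe driver calls
 * bus_space_map() itself, it can request a non-posted mapping directly
 * via a mapping flag.  BUS_SPACE_MAP_NONPOSTED is an assumed flag name.
 */
static int
tegra_pcie_map_cfg(bus_space_tag_t bst, bus_addr_t base, bus_size_t size,
    bus_space_handle_t *bshp)
{
	return (bus_space_map(bst, base, size, BUS_SPACE_MAP_NONPOSTED,
	    bshp));
}
```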

This revision is now accepted and ready to land. Feb 21 2022, 5:11 PM

I had to do this kind of nasty-feeling patch to get devices right on M1: https://people.freebsd.org/~kevans/m1/pci-mapping.diff -- I don't know if there's a better way to have done that, or if a custom bus_activate_resource that handles the mapping is really just the way to go.

Does the M1 work if you have this patch, D34333, and the following change?

diff --git a/sys/arm64/include/vm.h b/sys/arm64/include/vm.h
index e479aab52e26..9cdd92b7284e 100644
--- a/sys/arm64/include/vm.h
+++ b/sys/arm64/include/vm.h
@@ -40,7 +40,7 @@
  * VM_MEMATTR_DEVICE can be changed to VM_MEMATTR_DEVICE_nGnRE when
  * the PCI drivers use VM_MEMATTR_DEVICE_NP for their config space.
  */
-#define        VM_MEMATTR_DEVICE               VM_MEMATTR_DEVICE_nGnRnE
+#define        VM_MEMATTR_DEVICE               VM_MEMATTR_DEVICE_nGnRE
 #define        VM_MEMATTR_DEVICE_NP            VM_MEMATTR_DEVICE_nGnRnE

 #ifdef _KERNEL

Does the M1 work if you have this patch, D34333, and the following change?

Negative, ish... I note that it triggers an overflow in smmu.c that I just hacked around, but with this I don't even get to COPYRIGHT. I'll dig into why a bit later.

This revision was automatically updated to reflect the committed changes.