It seems that, a long time ago, resource.h was designed to be provided per machine type to cover differences like the presence of I/O ports on x86 and their absence on other platforms. But as we see now, resource.h on every platform contains SYS_RES_IOPORT, while, for example, not every platform has a SYS_RES_GPIO. Do those platforms have no LEDs attached? :)
Details
- Reviewers: manu, jhb, kib, hselasky, imp, nwhitehorn
Diff Detail
- Repository: rS FreeBSD src repository (subversion)
Event Timeline
I think it would be fair to remove the copyrights from machine/resource.h; instead, put the trivial header files into the public domain or assign them to yourself.
sys/sys/resource_types.h:46
On x86 PCI_RES_BUS was 5, and this change definitely changes the KBI. I am not sure if the change is too important, since it seems to be mostly between dev/acpica and dev/pci, but it is simpler to keep PCI_RES_BUS at 5 instead of trying to see the consequences.
o Do not copyright "empty files"
o Give the old owner (i386) its old number (PCI_RES_BUS back to 5)
sys/sys/resource_types.h:46
IMO, it is not that important, since out-of-sync kmod builds give a "kernel version mismatch". But to respect grandpa (i386), I'll swap PCI and GPIO :)
This looks fine to me, but I have no idea why John put PCI_RES_BUS under #ifdef NEW_PCIB instead of defining it always. Also, would it be reasonable to do a separate pass to rename PCI_RES_BUS to SYS_RES_PCIBUS ?
I will wait for @jhb's input about that. But everywhere I found PCI_RES_BUS it was checked together w/ NEW_PCIB, and I didn't find any place where defining PCI_RES_BUS unconditionally could break anything.
Also, would it be reasonable to do a separate pass to rename PCI_RES_BUS to SYS_RES_PCIBUS ?
Sure, but that change will affect commercial development.
Many guys will get to know "who is ray?" :)
John said he will audit it and then remove both #ifdefs, NEW_PCIB and PCI_RES_BUS, if possible.
My search shows that in some places only #ifdef PCI_RES_BUS is checked, so removing it may unexpectedly change behavior.
So I leave the decision for that case to John.
Some places use whether or not PCI_RES_BUS is defined to decide if they should handle PCI bus numbers. BTW, I called it PCI_RES_BUS instead of SYS_RES_* because it isn't a bus-independent "system" resource (like memory or I/O ports or DMA channels), but instead is specific to a PCI bus (and in a multi-segment/domain system you would have duplicates even).
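A minimal sketch of the pattern being described (not the actual pci_pci.c code; pcib_try_alloc_busnum() is a hypothetical helper): MI bridge code keys off whether PCI_RES_BUS is defined to decide if bus numbers are a managed resource on this platform.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>

/*
 * Hypothetical helper, not the real pcib code: allocate one bus number
 * only on platforms where PCI_RES_BUS exists.
 */
static struct resource *
pcib_try_alloc_busnum(device_t dev, int *rid)
{
#ifdef PCI_RES_BUS
	/* Bus numbers are a managed resource here; ask the parent for one. */
	return (bus_alloc_resource_anywhere(dev, PCI_RES_BUS, rid, 1, 0));
#else
	/* No PCI_RES_BUS: this platform does not manage bus numbers. */
	return (NULL);
#endif
}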
Adding new types should perhaps be a separate commit so that you can explain what they are. SYS_RES_PWR isn't quite obvious for example. In other parts of the tree and in other discussions regarding power we have talked about having a separate "plane" in the device space for power (so you have a power parent that is the device that gives you power, etc.). SYS_RES_IOMMU also doesn't seem very apparent at all as to what that will mean. We have I/O MMU support now via bus_dma on x86 and sparc64 (and alpha had it in the past), but those didn't require a 'struct resource', i.e. some kind of partitionable resource.
Maybe we should make a separate resource ID - SYS_RES_PCIBUS - and keep a PCI_RES_BUS define for those who want to "decide if they should handle PCI bus numbers"?
Resource IDs, just like their names, must not be used to check for the presence of those resources. Most platforms have no I/O ports, but SYS_RES_IOPORT is there.
Adding new types should perhaps be a separate commit so that you can explain what they are.
No problem with it.
SYS_RES_PWR isn't quite obvious for example. In other parts of the tree and in other discussions regarding power we have talked about having a separate "plane" in the device space for power (so you have a power parent that is the device that gives you power, etc.).
I added it exactly for the case you put in parentheses: a lane number in the power parent controller.
SYS_RES_IOMMU also doesn't seem very apparent at all as to what that will mean. We have I/O MMU support now via bus_dma on x86 and sparc64 (and alpha had it in the past), but those didn't require a 'struct resource', i.e. some kind of partitionable resource.
A quick look into bus_dma shows no usage of a special resource ID, so IMO any resource type can be used. I added this to handle a frequently used OFW/FDT property.
While working on a separate change (GitHub/bsdjhb/freebsd/tree/bus_map_resource_more) I remembered more of why PCI_RES_BUS is special. Its presence indicates to MI code (e.g. sys/dev/pci/pci_pci.c) that the MD host-PCI bridge drivers support managing bus numbers and so bus ranges should be allocated via bus_alloc_resource(). After the bus_map_resource_more branch lands, it will be possible to rework the one gross hack in pci_pci.c (where it calls bus_activate_resource to "map" child resources but asking the parent to do so) and then NEW_PCIB will be able to be turned on by default for all archs. However, the PCI_RES_BUS bits still need to be conditional on platforms that support PCI_RES_BUS. This is generally managed in the host-PCI bridge drivers rather than in the nexus. So in practice the '#ifdef NEW_PCIB' in the new header is in fact wrong. The real test is something more complex like this:
#if (defined(__i386__) || defined(__amd64__)) && defined(NEW_PCIB)
#define PCI_RES_BUS 5
#endif
Except you need more convoluted expressions to handle other architectures. For x86 the rule is that bus numbers are only supported in host-PCI bridge drivers if NEW_PCIB is defined. It seems that only riscv, powerpc, and sparc64 don't already define PCI_RES_BUS, so perhaps a negative architecture list is shorter than a positive one.
That is because PCI defines I/O port BARs, so those platforms fake up IO ports even if they don't have them to placate PCI.
SYS_RES_PWR isn't quite obvious for example. In other parts of the tree and in other discussions regarding power we have talked about having a separate "plane" in the device space for power (so you have a power parent that is the device that gives you power, etc.).
I added it exactly for the case you put in parentheses: a lane number in the power parent controller.
That isn't what I said. I said a power parent, but if a power controller has multiple lanes as a general abstraction then this might be ok, though the name feels generic. (ACPI's power consumer/producer abstraction does not work this way: it only has producers and consumers, the OS layer is required to maintain a reference count of active consumers and only turn off a power producer when the reference count drops to zero, and there is no concept of lanes or of a partitionable resource. In particular, in ACPI you cannot turn off part of a producer. If you had a power controller with independent rails, ACPI would model those as independent producers rather than partitionable lanes within a single producer.) When I've talked with other folks about power management on ARM and PowerPC in the past, the request has been for an alternate set of parent <-> child relationships, so you can do 'device_get_power_power(dev)' to get a device's power producer, etc., not in terms of some partitionable resource managed via rman.
SYS_RES_IOMMU also doesn't seem very apparent at all as to what that will mean. We have I/O MMU support now via bus_dma on x86 and sparc64 (and alpha had it in the past), but those didn't require a 'struct resource', i.e. some kind of partitionable resource.
A quick look into bus_dma shows no usage of a special resource ID, so IMO any resource type can be used. I added this to handle a frequently used OFW/FDT property.
What does the OFW/FDT property actually describe? Is it some partitionable (divisible) range of numbers or is it a handle to an iommu controller?
While working on a separate change (GitHub/bsdjhb/freebsd/tree/bus_map_resource_more) I remembered more of why PCI_RES_BUS is special. Its presence indicates to MI code (e.g. sys/dev/pci/pci_pci.c) that the MD host-PCI bridge drivers support managing bus numbers and so bus ranges should be allocated via bus_alloc_resource(). After the bus_map_resource_more branch lands, it will be possible to rework the one gross hack in pci_pci.c (where it calls bus_activate_resource to "map" child resources but asking the parent to do so) and then NEW_PCIB will be able to be turned on by default for all archs. However, the PCI_RES_BUS bits still need to be conditional on platforms that support PCI_RES_BUS. This is generally managed in the host-PCI bridge drivers rather than in the nexus. So in practice the '#ifdef NEW_PCIB' in the new header is in fact wrong. The real test is something more complex like this:
#if (defined(__i386__) || defined(__amd64__)) && defined(NEW_PCIB)
#define PCI_RES_BUS 5
#endif

Except you need more convoluted expressions to handle other architectures. For x86 the rule is that bus numbers are only supported in host-PCI bridge drivers if NEW_PCIB is defined. It seems that only riscv, powerpc, and sparc64 don't already define PCI_RES_BUS, so perhaps a negative architecture list is shorter than a positive one.
I found three cases:
- Platforms that define both (riscv is w/o a PCI controller, but it's new enough to use NEW_PCIB :) )
- Platforms with a mix of no PCI, PCI without NEW_PCIB, and PCI with NEW_PCIB.
- Platforms with PCI but without PCI_RES_BUS.
So I prefer to do it this way (w/o extra space):
#define SYS_RES_PCIBUS 5
#if defined(__powerpc__) || defined(__sparc64__)
/* PCI_RES_BUS is not defined. */
#else
#if defined(__arm__) || defined(__mips__)
#ifdef NEW_PCIB
#define PCI_RES_BUS SYS_RES_PCIBUS
#endif
#else
/* amd64 || arm64 || i386 || riscv || etc. */
#define PCI_RES_BUS SYS_RES_PCIBUS
#endif
#endif
SYS_RES_PCIBUS is new and will not hurt anything.
I understand why. But I think we have to have a way to fake more than just PCI resources. Even more, an i386/amd64 kernel may use FDT/OFW too, and all platforms work with GPIOs. So we should not wait for the first usage of some resource on a platform before adding an ID for that resource.
SYS_RES_PWR isn't quite obvious for example. In other parts of the tree and in other discussions regarding power we have talked about having a separate "plane" in the device space for power (so you have a power parent that is the device that gives you power, etc.).
I added it exactly for the case you put in parentheses: a lane number in the power parent controller.
That isn't what I said. I said a power parent, but if a power controller has multiple lanes as a general abstraction then this might be ok, though the name feels generic. (ACPI's power consumer/producer abstraction does not work this way: it only has producers and consumers, the OS layer is required to maintain a reference count of active consumers and only turn off a power producer when the reference count drops to zero, and there is no concept of lanes or of a partitionable resource. In particular, in ACPI you cannot turn off part of a producer. If you had a power controller with independent rails, ACPI would model those as independent producers rather than partitionable lanes within a single producer.) When I've talked with other folks about power management on ARM and PowerPC in the past, the request has been for an alternate set of parent <-> child relationships, so you can do 'device_get_power_power(dev)' to get a device's power producer, etc., not in terms of some partitionable resource managed via rman.
Both are used (link). Some boards use only the parent, some specify lanes.
SYS_RES_IOMMU also doesn't seem very apparent at all as to what that will mean. We have I/O MMU support now via bus_dma on x86 and sparc64 (and alpha had it in the past), but those didn't require a 'struct resource', i.e. some kind of partitionable resource.
A quick look into bus_dma shows no usage of a special resource ID, so IMO any resource type can be used. I added this to handle a frequently used OFW/FDT property.
What does the OFW/FDT property actually describe? Is it some partitionable (divisible) range of numbers or is it a handle to an iommu controller?
The parent, the child ID (the DMA master ID in the doc), and sometimes a DMA range. (doc)
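For concreteness, a rough sketch of reading the kind of binding this seems to refer to (iommus = <&iommu-phandle master-id>), assuming a single-cell master ID; foo_get_iommu() is a made-up helper. The point is that the property is a handle plus an ID, not a partitionable range.

#include <sys/param.h>
#include <sys/errno.h>
#include <dev/ofw/openfirm.h>

/*
 * Hypothetical helper: fetch the IOMMU phandle and the DMA master ID for
 * a node, assuming the common "iommus = <&iommu master-id>" layout.
 */
static int
foo_get_iommu(phandle_t node, phandle_t *iommu, pcell_t *master_id)
{
	pcell_t cells[2];

	if (OF_getencprop(node, "iommus", cells, sizeof(cells)) <
	    (ssize_t)sizeof(cells))
		return (ENOENT);
	*iommu = cells[0];	/* phandle of the IOMMU controller */
	*master_id = cells[1];	/* DMA master ID within that IOMMU */
	return (0);
}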
@jhb, I see you are looking at this update from the rman point of view - kind of "do we need rman for that resource or not". I'm doing it for a slightly different reason. Let me describe it tersely:
Modern FDT-enabled devices are like Lego. "If you may need a 26MHz clock and its divided clocks in 10 or more devices, why would you need dividers in each device? Just switch them."
So, if we traditionally have a way to automatically get IRQ/MEM/IOPORT resources, then for this situation it would be much better to have an automatic way to get clocks/GPIOs/etc. as well.
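To illustrate the idea, a sketch under the assumption that SYS_RES_GPIO is added as proposed here (foo_attach() is a made-up driver): a device would pick up a GPIO line the same way it already picks up an IRQ, with the bus pre-parsing the FDT data.

#include <sys/param.h>
#include <sys/errno.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>

/* Hypothetical driver attach routine, not existing code. */
static int
foo_attach(device_t dev)
{
	struct resource *irq, *gpio;
	int irq_rid = 0, gpio_rid = 0;

	/* Works today: the bus filled in the IRQ resource from FDT/hints. */
	irq = bus_alloc_resource_any(dev, SYS_RES_IRQ, &irq_rid, RF_ACTIVE);

	/* Proposed: the same pattern for a GPIO line (an LED, say). */
	gpio = bus_alloc_resource_any(dev, SYS_RES_GPIO, &gpio_rid, RF_ACTIVE);

	if (irq == NULL || gpio == NULL)
		return (ENXIO);
	return (0);
}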
Can you please give me your motivation for adding new resource types when it's clear that these resources cannot be handled by rman/newbus without extreme API breakage?
For the OFW/FDT world, the major issues are:
- a given resource (requested or allocated) cannot be described by a single integer, and the infrastructure must treat it as fully opaque (there is nothing like a 'resource start', 'resource end', or interval enumerator)
- the hierarchy for other resources (clk, regulator, reset, phy, ...) is not bus related and can have circular dependencies - so it cannot be represented by a tree, only by a graph. As a result, a bus enumerator cannot prepare the resource list for its child devices. This breaks a basic design expectation of newbus.
- circular resource dependencies (between types - frequent; within a single resource type - not that frequent, but they exist, mainly for clocks) also mean that we really need a staggered device_attach(); there is no way to do this by automatically ordering device attach driven by resource availability.
Another current newbus or bus_space limitation is that we should have (general, globalized) memory attributes and should pass them to all memory mapping functions. On arm or arm64, PCI config space should be mapped with a different attribute (strongly ordered) than PCI (or other devices') MMIO register space (where posted writes are allowed). The actual behavior (all devices are mapped as strongly ordered) has a measurable performance impact.
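A sketch of what this could look like, assuming the bus_map_resource() API from jhb's bus_map_resource_more branch mentioned above (foo_map_regs() is a made-up function): the driver asks for a per-mapping memory attribute instead of inheriting a global strongly ordered default.

#include <sys/param.h>
#include <sys/bus.h>
#include <sys/rman.h>
#include <machine/resource.h>
#include <vm/vm.h>

/*
 * Hypothetical example: map an already-allocated SYS_RES_MEMORY resource
 * with an explicit memory attribute (write-combining here) rather than
 * the platform-wide default.
 */
static int
foo_map_regs(device_t dev, struct resource *mem, struct resource_map *map)
{
	struct resource_map_request req;

	resource_init_map_request(&req);
	req.memattr = VM_MEMATTR_WRITE_COMBINING;
	return (bus_map_resource(dev, SYS_RES_MEMORY, mem, &req, map));
}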
Don't take me wrong, I'm still a newbus fan; I just don't see a reliable way to extend it without fatal/significant API breakage.