
bhyve: Add Integrated Endpoint to PCIe Capability
ClosedPublic

Authored by chuck on Apr 14 2019, 1:24 AM.

Details

Summary

The NVMe CAM driver reports the PCIe Link Capability and Status for devices. For emulated bhyve NVMe devices, this looks like:

nda0: nvme version 1.3 x63 (max x63) lanes PCIe Gen15 (max Gen15) link

The driver outputs this because the emulated device doesn't include the PCIe Capability structure. The NVMe specification requires these registers, so the fix is to add this set of capability registers to the emulated device.

Note that PCI Express devices integrated into the Root Complex (i.e. on Bus 0x0) do not have to support the Link Capability or Status registers. Windows will fail to start (i.e. Code 10) devices that appear to be part of the Root Complex but report being a PCI Express Endpoint. So also add a check to pci_emul_add_pciecap() that detects an integrated device and changes the device type accordingly.

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

Looks good, other than a lot of extra blank lines.

I suspect almost all of the other PCI-e devices in bhyve are also actually endpoints and not ports. Note that right now there are no PCI bridges in bhyve's topology and everything is on PCI bus zero.

Hmm, so it seems that the host bridge driver claims to be a root port. That seems really odd. The hostb0 device on my ivy bridge desktop machine doesn't have a PCI-e capability at all:

hostb0@pci0:0:0:0:      class=0x060000 card=0x77511462 chip=0x01008086 rev=0x09 hdr=0x00
    cap 09[e0] = vendor (length 12) Intel cap 0 version 1

In general I think your change is probably right, but I would probably omit the warning since it will just be spam, and I would trim some of the blank lines as Rod suggested. I might also reword the comment to be something like:

/*
  * Use the integrated endpoint type for endpoints on a root complex bus.
  *
  * NB: bhyve currently only supports a single PCI bus that is the root
  * complex bus, so all endpoints are integrated.
  */

Also, I think I know why hostb0 has this bogus PCI-e cap, it's to deal with the MSI blacklisting logic in FreeBSD. It probably should be an integrated endpoint and not a root port IMO.

virtio devices probably shouldn't have PCI-e caps, but other device models like AHCI probably should. We should check the virtio spec to see whether the virtio devices should have a PCI Express capability or not.

Update to incorporate feedback from Rod and John

From my reading of the virtio specification, virtio devices must conform to the bus spec they are emulating. So if our virtio devices don't say PCIe, they shouldn't need the PCIe CAP.

The AHCI spec is similar:

Though not mentioned in this specification, other capability pointers may be necessary, depending upon the implementation space. Examples would be the PCI-X capability for PCI-X implementations, PCI-Express capability for PCI-Express implementations, and potentially the vendor specific capability pointer.

So I wouldn't think the AHCI implementation would need the PCIe CAP either unless we wanted it to be PCIe.

Other devices may or may not need this, but I think we can handle them on an ad-hoc basis.

We should fix hostbridge devices to be endpoints as a followup, and we should sprinkle pci_emul_add_pciecap in more places (ahci and virtio).

This revision is now accepted and ready to land. May 23 2019, 4:50 PM
This revision was automatically updated to reflect the committed changes.