
Implement pci_enable_msi() and pci_disable_msi() in the LinuxKPI.

Authored by hselasky on Jul 20 2019, 8:33 PM.



Right now, in drm-kmod, MSI is ifdef'd out. (Even though it was used in the old in-tree DRM port.)

This doesn't seem to cause any problems on amd64, but legacy interrupts and ARM/aarch64 don't mix very well.

That is, on my MACCHIATObin with a Radeon RX 480, I've been seeing an interrupt rate of >150000 with just the console and >300000 with a GUI. 25% of a CPU core (!) was wasted on interrupts, and desktop responsiveness suffered.

Enabling MSI in amdgpu produces a much more reasonable interrupt rate — I'm seeing 160 as I'm typing this :)

I've tried to use the dev.msix field, but I couldn't figure out how the MSI-X assignment works, so I introduced a new flag. Suggestions for a simpler/better way are of course welcome.

Also, this field is currently not initialized; where is the best place to do that?

Diff Detail

rS FreeBSD src repository
Automatic diff as part of commit; lint and unit tests not applicable.

Event Timeline

I'll have a closer look at your patch(es) when I'm back in the office. Please stay tuned.

Update: added pdev->msi_enabled flag that's used by i915kms.

hselasky added inline comments.Jul 29 2019, 2:28 PM
58 ↗(On Diff #60153)

I don't like adding this if() check. Can you integrate better with the existing code and rename dev->msix to dev->interrupt_start, so that this code can fall back to the existing return (irq - dev->msix + 1)?

642 ↗(On Diff #60153)

Can there be more than one MSI IRQ, and should we support that?

Maybe rename msix_max -> num_interrupts for use with both MSI and MSI-X?

250 ↗(On Diff #60153)

I think pdev is already allocated zeroed, so you can skip setting this to false?

Assume that the driver knows not to enable both MSIX and MSI.

58 ↗(On Diff #60153)

I just couldn't figure out how to make irq not equal dev->irq. Without setting irq and dev_irq in pci_enable_msi, they were equal anyway (IIRC both had the undefined value -1), and the return (0) happened regardless.

642 ↗(On Diff #60153)

This (deprecated) function (which GPU drivers use) can only enable one.

Otherwise, pretty much only pci_alloc_irq_vectors_affinity could allocate multiple MSI IRQs.

OK, I'll try to have a look at your patch.

hselasky added inline comments.Aug 6 2019, 8:12 AM
58 ↗(On Diff #60153)

Do drivers using pci_enable_msi() use dev->irq?

In pci_enable_msix() we don't set this field.

hselasky commandeered this revision.Aug 6 2019, 8:40 AM
hselasky updated this revision to Diff 60494.
hselasky edited reviewers; removed: hselasky.
hselasky retitled this revision from linuxkpi: implement pci_enable_msi / pci_disable_msi to Implement pci_enable_msi() and pci_disable_msi() in the LinuxKPI..

Can you test this updated patch?

hselasky updated this revision to Diff 60496.Aug 6 2019, 9:00 AM

Can you test this patch?

hselasky updated this revision to Diff 60497.Aug 6 2019, 9:04 AM

Please test.

We still need pdev->irq = rle->start (and, for completeness, the corresponding handling in the disable function); otherwise the driver tries to request LINUX_IRQ_INVALID:

vgapci0: attempting to allocate 1 MSI vectors (1 supported)
vgapci0: using IRQ 12 for MSI
requesting irq 65535
requesting irq 65535 result -6

(The "requesting" lines are from my debug print.)

For some reason the driver uses pdev->irq, not pdev->dev.irq. With that line added, it works.

(Also, it looks like you didn't rebase the patch on top of the recent powerpc changes? I had to apply the chunk with pci_enable_msi manually.)

hselasky updated this revision to Diff 60504.Aug 6 2019, 1:44 PM

Can you try this patch and report back?

hselasky updated this revision to Diff 60505.Aug 6 2019, 1:50 PM

Please test!

This revision is now accepted and ready to land.Aug 7 2019, 1:57 AM