
Implement pci_enable_msi() and pci_disable_msi() in the LinuxKPI.
ClosedPublic

Authored by hselasky on Jul 20 2019, 8:33 PM.

Details

Summary

Right now, in drm-kmod, MSI is ifdef'd out, even though it was used in the old in-tree DRM port.

That doesn't seem to cause any problems on amd64, but legacy interrupts and ARM/aarch64 don't mix very well.

That is, on my MACCHIATObin with a Radeon RX 480, I've been seeing an interrupt rate of >150000 with just the console and >300000 with a GUI. 25% of a CPU core (!) was wasted on interrupt handling, and desktop responsiveness suffered.

Enabling MSI in amdgpu produces a much more reasonable interrupt rate — I'm seeing 160 as I'm typing this :)


I tried to use the dev.msix field, but I couldn't figure out how the MSI-X assignment works, so I introduced a new flag. Suggestions for a simpler/better way are of course welcome.

Also, this field is currently not initialized; where is the best place to do that?
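
For reference, a minimal sketch of the shape such an implementation could take on the FreeBSD side. The linuxkpi_pci_get_rle() helper name and the irq_start/irq_end bookkeeping are assumptions drawn from the discussion below, not the final committed code:

static inline int
pci_enable_msi(struct pci_dev *pdev)
{
	struct resource_list_entry *rle;
	int avail = 1;	/* this legacy API enables exactly one MSI vector */
	int error;

	/* pci_alloc_msi() is the native FreeBSD allocator; 0 means success */
	error = -pci_alloc_msi(pdev->dev.bsddev, &avail);
	if (error != 0)
		return (error);

	/* find the rman resource-list entry for the freshly allocated IRQ */
	rle = linuxkpi_pci_get_rle(pdev, SYS_RES_IRQ, 1);	/* assumed helper */
	pdev->dev.irq_start = rle->start;
	pdev->dev.irq_end = rle->start + avail;
	pdev->irq = rle->start;	/* some drivers read pdev->irq directly */
	pdev->msi_enabled = true;
	return (0);
}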

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Lint Skipped
Unit
Tests Skipped

Event Timeline

I'll have a closer look at your patch(es) when I'm back in the office. Please stay tuned.

Update: added a pdev->msi_enabled flag that's used by i915kms.

sys/compat/linuxkpi/common/include/linux/interrupt.h
58

I don't like adding this if() check. Can you integrate better with the existing code and rename dev->msix to dev->interrupt_start, so that this code can fall back to the existing return (irq - dev->msix + 1)?
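
For instance, roughly this, assuming the field is renamed and set by both pci_enable_msi() and pci_enable_msix(); the helper name here is illustrative, and the final field name was still under discussion:

static inline int
linux_irq_rid(struct device *dev, unsigned int irq)
{
	/* legacy INTx keeps rid 0; MSI/MSI-X vectors start at rid 1 */
	if (irq == dev->irq)
		return (0);
	return (irq - dev->interrupt_start + 1);
}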

sys/compat/linuxkpi/common/include/linux/pci.h
642

Can there be more than one MSI IRQ, and should we support that?

Maybe rename msix_max -> num_interrupts for use with both MSI and MSI-X?

sys/compat/linuxkpi/common/src/linux_pci.c
250

I think pdev is already allocated zeroed, so you can skip setting this to false?

Assume that the driver knows not to enable both MSIX and MSI.

sys/compat/linuxkpi/common/include/linux/interrupt.h
58

I just couldn't figure out how to make irq not equal dev->irq. Without setting irq and dev_irq in pci_enable_msi(), they were equal anyway (IIRC both held the undefined value -1), so the return (0) path was taken anyway.

sys/compat/linuxkpi/common/include/linux/pci.h
642

This (deprecated) function, which the GPU drivers use, can only enable one.

Otherwise, pretty much only pci_alloc_irq_vectors_affinity could allocate multiple MSI IRQs.
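
For context, here is how the two upstream Linux APIs differ from a driver's point of view (a sketch of Linux driver usage, not LinuxKPI code):

/* deprecated single-vector API, still used by the GPU drivers: */
if (pci_enable_msi(pdev) == 0)
	irq = pdev->irq;	/* exactly one MSI vector */

/* modern API that can hand out several MSI (or MSI-X) vectors: */
nvec = pci_alloc_irq_vectors(pdev, 1, 4, PCI_IRQ_MSI);
if (nvec > 0)
	irq = pci_irq_vector(pdev, 0);	/* Linux IRQ number of vector 0 */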

OK, I'll try to have a look at your patch.

sys/compat/linuxkpi/common/include/linux/interrupt.h
58

Do drivers using pci_enable_msi() use dev->irq?

In pci_enable_msix() we don't set this field.

hselasky updated this revision to Diff 60494.
hselasky edited reviewers, added: val_packett.cool; removed: hselasky.
hselasky retitled this revision from linuxkpi: implement pci_enable_msi / pci_disable_msi to Implement pci_enable_msi() and pci_disable_msi() in the LinuxKPI.

Can you test this updated patch?

We still need pdev->irq = rle->start (and, for completeness, the equivalent in the disable function); otherwise it tries to request LINUX_IRQ_INVALID:

vgapci0: attempting to allocate 1 MSI vectors (1 supported)
vgapci0: using IRQ 12 for MSI
requesting irq 65535
requesting irq 65535 result -6

(the "requesting" lines are my debug prints)

For some reason the driver uses pdev->irq, not pdev->dev.irq. With that line added, it works.
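
For completeness, the disable path then wants the mirror image; a sketch, assuming the same field layout as above:

static inline void
pci_disable_msi(struct pci_dev *pdev)
{
	if (!pdev->msi_enabled)
		return;
	pci_release_msi(pdev->dev.bsddev);	/* native FreeBSD release call */
	pdev->dev.irq_start = 0;
	pdev->dev.irq_end = 0;
	pdev->irq = pdev->dev.irq;	/* fall back to the legacy INTx line */
	pdev->msi_enabled = false;
}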

(Also, it looks like you didn't rebase the patch on top of the recent powerpc changes? I had to apply the chunk with pci_enable_msi manually.)

Can you try this patch and report back?

This revision is now accepted and ready to land. Aug 7 2019, 1:57 AM