User Details
- User Since
- Nov 9 2020, 10:16 PM (233 w, 2 d)
Mar 11 2024
Verified in NetApp lab.
Dec 7 2021
Aug 3 2021
Jul 12 2021
Jul 1 2021
Jun 29 2021
Jun 25 2021
Jun 24 2021
I do not see "irdma" available in FreeBSD yet?
Jun 18 2021
Jun 16 2021
I have applied the patch and rebooted my machine, and I see the "unqualified" message for a good cable.
The good thing is that with this patch I do not see the "unqualified" message for an admin link-down.
Jun 15 2021
Thanks Krzysztof, I will hold onto this review and wait for further direction from you.
Jun 14 2021
So, thinking about this, my guess is that when you reboot the machine we would find "unqualified" reported for a qualified cable, because the FW sees this as link down. Also, my guess is that the cable will show up as unqualified when you shut the link on the link partner.
Thanks Krzysztof.
Jun 13 2021
Jun 11 2021
Also, in the original change, https://reviews.freebsd.org/D28028, I noticed that after executing ixl_set_link(pf, false), the PHY capabilities query returns an_info with I40E_AQ_QUALIFIED_MODULE unset. So the same supported/qualified module becomes unqualified.
I think the crux of the problem is with ixl_set_link() unsetting I40E_AQ_QUALIFIED_MODULE.
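For context, here is a minimal sketch of the kind of check involved, assuming the I40E_AQ_QUALIFIED_MODULE flag is read from the firmware link status the way the i40e shared code exposes it; the helper name is hypothetical and this is not the exact driver path:

```c
/* Sketch only: belongs with sys/dev/ixl and assumes the ixl_pf.h /
 * i40e shared-code headers.  Illustrates how an "unqualified module"
 * is detected from the admin-queue link status. */
static void
ixl_check_module_qualification(struct ixl_pf *pf)	/* hypothetical helper */
{
	struct i40e_hw *hw = &pf->hw;
	bool link_up;

	/* Refresh the link status from firmware via the admin queue. */
	i40e_get_link_status(hw, &link_up);

	/*
	 * an_info carries autoneg/module flags from firmware.  When
	 * ixl_set_link(pf, false) has forced the link down, firmware may
	 * report link status with I40E_AQ_QUALIFIED_MODULE cleared, so a
	 * perfectly good cable shows up as "unqualified".
	 */
	if (!(hw->phy.link_info.an_info & I40E_AQ_QUALIFIED_MODULE))
		device_printf(pf->dev,
		    "Link down: module reported as unqualified\n");
}
```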
May 10 2021
Also, I would prefer to have a quick call to discuss the ideas and thoughts we have. We would need an expert from Intel to help us understand AIM.
On a side note, from NetApp performance experiments on NetApp platforms comparing BSD11 (legacy driver) vs BSD12 (IFLIB-based driver), we noticed an almost 3.5x-4x latency spike in one of the write tests with the IFLIB-based drivers.
I have a similar observation (bad news) with respect to UDP, but TCP looks just fine. My runs are all on a NetApp platform.
Please note: my client is not HoL.
May 7 2021
May 6 2021
Thanks Kevin. Your observations seem to fall in line with my understanding of the code. I have been digging into this to understand the performance impact NetApp is seeing after migrating to IFLIB-based drivers. Here are my observations so far (mainly with respect to the Intel 10G/40G drivers):
May 5 2021
Thinking more on this, let's cancel/ignore this change. In other words, keep the default setting at FALSE. Anyone who wants AIM can enable it on their platform.
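A rough sketch of what an opt-in default could look like, using a standard FreeBSD loader tunable/sysctl; the enable_aim name under hw.ixl is illustrative, not necessarily what this review uses:

```c
/* Sketch only: assumes the hw.ixl sysctl node from sys/dev/ixl. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

SYSCTL_DECL(_hw_ixl);

/* AIM disabled by default; platforms that want it set the tunable. */
static int ixl_enable_aim = 0;
SYSCTL_INT(_hw_ixl, OID_AUTO, enable_aim, CTLFLAG_RWTUN,
    &ixl_enable_aim, 0,
    "Enable adaptive interrupt moderation (AIM)");
```

With something like this, a platform that wants AIM could set the tunable in loader.conf (or flip the sysctl at runtime) without changing the default behavior for everyone else.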
Summarizing: AIM basically decides the value of ITR. Without AIM, the ITR value always stays at 128. With AIM enabled, the ITR value now varies between 32 and 8192. Below is the histogram from a NetApp platform (12 processors, Intel Xeon D-1557 @ 1.50 GHz) from a one-minute iperf3 run.
I do not know exactly how the ITR value translates to interrupt moderation, but to the best of my knowledge it determines the minimum gap between two consecutive interrupts. From the histogram below you can see that, with more frequent data, the ITR value concentrates mostly around 32 and 128.
That also implies the gap between interrupts is lower than with AIM disabled (i.e., a fixed ITR of 128). This falls in line with your observation too.
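To make the throttling relationship concrete, here is a minimal sketch of the arithmetic. The 2 us register granularity is an assumption taken from the i40e-family ITR register description, not from this review, and the exact unit for the values above should be confirmed:

```c
#include <stdio.h>

/*
 * Back-of-the-envelope only: relates an ITR value to the minimum gap
 * between interrupts and the resulting maximum interrupt rate per queue.
 * ITR_GRANULARITY_US is an assumption (i40e-family register unit).
 */
#define ITR_GRANULARITY_US	2.0

int
main(void)
{
	const unsigned itr_values[] = { 32, 128, 8192 };

	for (size_t i = 0; i < sizeof(itr_values) / sizeof(itr_values[0]); i++) {
		double gap_us = itr_values[i] * ITR_GRANULARITY_US;
		double max_irq_per_sec = 1e6 / gap_us;
		printf("ITR %4u -> >= %7.0f us between interrupts "
		    "(<= %6.0f interrupts/s per queue)\n",
		    itr_values[i], gap_us, max_irq_per_sec);
	}
	return (0);
}
```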
May 3 2021
Feb 7 2021
Review https://reviews.freebsd.org/D27465 helps to improve latency overall and has no dependency on this revision.
Jan 27 2021
- Ensure MSIX is released after freeing IRQ resources
Dec 22 2020
Dec 21 2020
- Add IFDI_QUEUES_FREE() to iflib_pseudo_deregister()
Dec 16 2020
Dec 9 2020
Addressed Mark J's comments.
- Move AIM functionality into its own function to keep interrupt handler code simple
Dec 3 2020
This change, with the re-arm IRQ setting (i.e., rx_wait_irq = 1), also helped NetApp improve latency in 64K writes.
Nov 24 2020
The rx bitmap does get freed in the following function call; remove the explicit free of the bitmap again.
Sure, thanks Kevin. I will discuss this with the Intel engineers.
Nov 23 2020
At NetApp, in a 64K sequential write load test with one of our proprietary applications, we noticed that the average LRO packet/segment size gets shorter. We also observed that the average LRO segment size is proportional to the interrupt rate and throttling. For our application to run well, we need the average LRO segment size to be on the higher side. With AIM enabled (and with the other Intel patch), the average LRO size now comes close to the BSD11 average LRO segment size for the same application.
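A hedged back-of-the-envelope model of that relationship (illustrative numbers only, not NetApp measurements): if a queue receives packets at some rate and LRO is flushed once per interrupt, each flush aggregates roughly packet rate divided by interrupt rate packets, capped by the ~64 KB LRO aggregate limit.

```c
#include <stdio.h>

/*
 * Illustrative model only: average LRO aggregate ~= packet rate / interrupt
 * (flush) rate, capped at roughly 64 KB.  The rates below are made-up
 * example numbers, not measured data.
 */
int
main(void)
{
	const double pkt_rate = 200000.0;	/* packets/s on one queue (example) */
	const double mss = 1448.0;		/* TCP payload bytes per packet */
	const double lro_cap = 65535.0;		/* typical LRO aggregate limit, bytes */
	const double irq_rates[] = { 8000.0, 16000.0, 32000.0 };

	for (size_t i = 0; i < sizeof(irq_rates) / sizeof(irq_rates[0]); i++) {
		double bytes = pkt_rate / irq_rates[i] * mss;
		if (bytes > lro_cap)
			bytes = lro_cap;
		printf("%6.0f interrupts/s -> average LRO segment ~%5.1f KB\n",
		    irq_rates[i], bytes / 1024.0);
	}
	return (0);
}
```

Under this simple model, higher interrupt rates shrink the average LRO segment, which matches why more aggressive throttling (larger ITR) pushes the average segment size back up.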