- User Since
- Jun 22 2015, 5:21 PM (258 w, 1 d)
Apr 27 2020
Looked mostly at ktls
I'm going to let Navdeep review this.
Apr 13 2020
Just a heads up that I plan to commit this tomorrow.
Apr 9 2020
Fix issues encountered when running mlx5 with hw tls enabled with this patchset:
- an access to ext_pgs via pointer was missed at compile time, because the driver was using the ext_buf pointer rather than the ext_pgs pointer
- the driver was using ext_pgs at the same time as pkthdr
- Fix accounting issue that jhb pointed out in lagg_bcast_start()
Apr 8 2020
Make cosmetic/style fixes as suggested by jhb
Update to reflect jhb's feedback. We can stop checking the return value of lagg_enqueue() inside the loop entirely, since it is overwritten the next time lagg_enqueue() is called outside the loop; this simplifies things as well.
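To make the simplification concrete, here is a minimal sketch (not the actual lagg code; `struct port`, `bcast()`, and this `lagg_enqueue()` stub are hypothetical stand-ins): when every iteration's return value except the last is overwritten anyway, only the final enqueue's result needs to be kept.

```c
#include <stddef.h>

/* Hypothetical stand-ins for the code under review. */
struct port { int sent; };

static int
lagg_enqueue(struct port *p, int pkt)
{
	(void)pkt;
	p->sent++;
	return (0);		/* success */
}

/*
 * Broadcast a packet to every port.  The per-iteration return value was
 * previously checked inside the loop, but it was always overwritten by
 * the next call, so only the final enqueue's result is reported.
 */
static int
bcast(struct port *ports, size_t n, int pkt)
{
	int ret = 0;
	size_t i;

	for (i = 0; i + 1 < n; i++)
		(void)lagg_enqueue(&ports[i], pkt);	/* result intentionally ignored */
	if (n > 0)
		ret = lagg_enqueue(&ports[n - 1], pkt);	/* only this ret survives */
	return (ret);
}
```

The design point is that dropping the dead per-iteration check changes no observable behavior while shrinking the loop body.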
Apr 6 2020
- Rebase past r359474
- Move m_ext_pgs down below m_pkthdr in the union, as suggested by Hans
Apr 1 2020
Picture a case where you're asked to refill 128 slots, and fail after 127. Don't you want to at least flush the 127 that you were able to allocate?
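A minimal sketch of the pattern being argued for (all names here — `alloc_buf()`, `flush()`, `refill()` — are hypothetical, not the iflib code itself): on partial allocation failure, flush the slots that did succeed rather than discarding them.

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins for an rx-ring refill path; purely illustrative. */
static bool alloc_ok;		/* simulated mbuf allocation success */
static int  flushed;		/* how many slots were handed to the NIC */

static bool alloc_buf(size_t i) { return (alloc_ok || i < 127); }
static void flush(size_t count) { flushed = (int)count; }

/*
 * Try to refill n rx slots.  If an allocation fails partway through,
 * still flush the slots that were successfully filled, so the hardware
 * gets whatever buffers we could provide.
 */
static size_t
refill(size_t n)
{
	size_t i;

	for (i = 0; i < n; i++)
		if (!alloc_buf(i))
			break;
	if (i > 0)
		flush(i);	/* flush even on partial failure */
	return (i);
}
```

In the 128-slot scenario from the comment, a failure on the last slot still delivers 127 buffers to the ring instead of zero.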
Mar 30 2020
Comment changes suggested by jhb
- switch from bcopy to memcpy as suggested by Hans
- Remove ifdef KERN_TLS from locals' declarations and move those into the ifdef'ed block as suggested by scottl
- Add comment regarding where n comes from as suggested by scottl
After discussion with Hans, it turns out that it is the record type, not the seqno, that the Mellanox driver needs in the trailer for hwtls.
- Restored the static asserts on the size of m_ext() that I'd forgotten I'd commented out
- Changed copy of seqno to a memcpy
Mar 9 2020
Committed as r358808. (however I forgot to tag the review in the commit message)
Mar 3 2020
I get it, but that (totally arbitrary) limit is a pet peeve of mine. It's one of those things where you trace up several levels of code to an "XXX" comment and an arbitrary value. It's like reading a mystery novel and finding out the butler really did do it.
Funny how the comment right above the assert calls it out as bogus.
Thank you for this cleanup.
Why not just remove the limit, rather than making things even more complex?
Feb 20 2020
Abandoning this in favor of moving selection of an uncongested queue for lacp into the mlx5 driver.
Feb 19 2020
I'm half joking, but what would you think about not supporting extension headers at all? They are the worst part of IPv6: they complicate everything and add lots of hairy cases. What benefit do they provide?
(I'm legitimately curious)
Feb 18 2020
I'd prefer the check be moved inside _iflib_assert(), but that's just a nit.
Feb 5 2020
What we really care about is that the load is not reordered, so that we ensure it happens prior to the loop. So I think you want an atomic load with acquire semantics:
"the operation must have completed before any subsequent load or store (by program order) is performed"
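A small sketch of the suggestion, using the portable C11 spelling (in the FreeBSD kernel the equivalent would be one of the atomic(9) `atomic_load_acq_*` routines; the `gen` variable and function name here are hypothetical):

```c
#include <stdatomic.h>

/* Hypothetical shared counter read before entering a processing loop. */
static _Atomic unsigned gen;

/*
 * An acquire load guarantees that loads and stores that follow it in
 * program order (e.g. the body of the loop) cannot be reordered to
 * before this load completes.
 */
unsigned
read_gen_before_loop(void)
{
	return (atomic_load_explicit(&gen, memory_order_acquire));
}
```

A plain (non-atomic, non-acquire) load would let the compiler or CPU hoist or sink it relative to the loop, which is exactly the reordering the comment wants to forbid.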
Feb 3 2020
To expand on this, the issue is that interrupt handlers are removed via intr_event_execute_handlers() when IH_DEAD is set. The thread removing the interrupt is woken up, and it calls intr_event_update(). When this happens, ie_hflags is cleared and rebuilt from all the remaining handlers sharing the event. When the last IH_NET handler is removed, the IH_NET flag will be cleared from ie_hflags (or ie_hflags may still be in the middle of being rebuilt in another context), and ithread_execute_handlers() may return with ie_hflags missing IH_NET. So we can end up in a scenario where IH_NET was present before calling ithread_execute_handlers() and is not present at its return, meaning we must cache the need for epoch locally.
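The "cache it locally" pattern can be sketched as follows (a minimal illustration, not the ithread code: `struct intr_event` here carries only the flags word, and `epoch_enter()`/`epoch_exit()`/`run_handlers()` are hypothetical stubs):

```c
#include <stdbool.h>

#define IH_NET	0x1

/* Hypothetical, stripped-down interrupt event. */
struct intr_event { volatile int ie_hflags; };

static int epoch_depth;
static void epoch_enter(void) { epoch_depth++; }
static void epoch_exit(void)  { epoch_depth--; }

/* Simulate a concurrent intr_event_update() clearing IH_NET mid-run. */
static void
run_handlers(struct intr_event *ie)
{
	ie->ie_hflags &= ~IH_NET;
}

void
execute(struct intr_event *ie)
{
	/* Cache the decision BEFORE running handlers... */
	bool in_epoch = (ie->ie_hflags & IH_NET) != 0;

	if (in_epoch)
		epoch_enter();
	run_handlers(ie);
	/* ...and use the cached value, never a re-read of ie_hflags. */
	if (in_epoch)
		epoch_exit();
}
```

Re-reading `ie_hflags` for the exit decision would leak the epoch when the flag was cleared mid-execution; the cached boolean keeps enter and exit paired.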
Jan 24 2020
I like the idea of using a different ether_input() for epoch and non-epoch safe drivers to get the tests out of the critical path. However, this is a minor optimization at best, as branch predictors on modern CPUs are really good.
Jan 22 2020
As we discussed in slack, I'm not a huge fan of fixing things with a callout. If the hardware were amenable, I'd much rather leave the ring partially stocked and drop new packets until we were able to allocate mbufs again. However, based on the findings you reported with igb (it wanting a full rx ring to generate interrupts), I'm afraid that might require too much work from hardware drivers. And I'd prefer a fix to a real problem, even if it's not something I personally find appealing, to leaving a real bug unfixed and having machines become unreachable.