User Details
- User Since: Jun 22 2015, 5:21 PM
Nov 14 2024
Super helpful review, John. I just opened a new review (https://reviews.freebsd.org/D47583) for the simplest suggested change. Will work on your other suggestions.
Nov 11 2024
Address Kib's feedback
Oct 28 2024
Why do we want or need a hardcoded list? Why can't this function be more like lagg_capabilities()? If we do want a hardcoded list, what about IFCAP_TXTLS*?
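For reference, the shape of the intersection-style computation I have in mind is sketched below, loosely modeled on what lagg_capabilities() does; the softc layout, member list, and names are hypothetical stand-ins, not this driver's real structures:

```c
/*
 * Illustrative sketch only: compute the aggregate interface's
 * capabilities as the AND of every member's capabilities, rather
 * than starting from a hardcoded list.  Struct and field names
 * here are hypothetical.
 */
static int
aggregate_capabilities(struct agg_softc *sc)
{
	struct agg_member *m;
	int caps = ~0;		/* start from "everything supported" */

	LIST_FOREACH(m, &sc->sc_members, m_entries)
		caps &= if_getcapabilities(m->m_ifp);
	if (LIST_EMPTY(&sc->sc_members))
		caps = 0;
	return (caps);
}
```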
Oct 25 2024
I'd personally want to keep these messages with bootverbose. I can imagine it might be handy to see them at times...
Oct 23 2024
Fix style issue pointed out by Mark
Oct 22 2024
Why is this re-surfacing?
Oct 16 2024
The A/B results were not surprising ("boring", as David likes to say): just slightly higher CPU on the canary (due to the increased irq rate), but no clear streaming quality changes.
All in all, it seems to work and do no real harm, but we'll not use it due to the increased CPU.
Yeah, my ideal irq rate per queue is < 1000. We mostly use Chelsio and Mellanox NICs that can do super aggressive irq coalescing without freaking out TCP, thanks to using RX timestamps. Super aggressive coalescing like this lets us build packet trains in excess of 1000 packets to feed to LRO via RSS-assisted LRO, and we actually have useful LRO on internet workloads with tens of thousands of TCP connections per queue. That reminds me that I should port RSS-assisted LRO to iflib (e.g., lro_queue_mbuf()).
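For anyone unfamiliar with the idea, the driver-side shape is roughly the sketch below; every name in it is a hypothetical stand-in, it is only meant to show the queue, sort-by-flow, then flush pattern:

```c
/*
 * Hypothetical sketch of RSS-assisted LRO: instead of handing packets
 * to LRO one at a time as they arrive, queue the whole coalesced batch,
 * sort by RSS hash so each flow's packets are adjacent, then aggregate
 * and flush.  Function and field names are illustrative only.
 */
static void
rxq_intr_batch(struct rxq *q)
{
	struct mbuf *m;

	/* Drain the entire batch built up by aggressive irq coalescing. */
	while ((m = rxq_dequeue(q)) != NULL)
		lro_enqueue(&q->lro, m);	/* keyed by m->m_pkthdr.flowid */

	/* Sort by flowid, merge per-flow packet trains, then flush to TCP. */
	lro_sort_and_flush(&q->lro);
}
```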
Sep 26 2024
Would it be better to call mb_unmapped_to_ext() here?
Ah, OK, I understand now.
Sep 25 2024
I'm very afraid there will be performance implications due to new cache misses here from queueing mbufs twice. On tens of thousands of interfaces running over 8 years, we've never hit a deadlock from this lock, and I don't think fixing this is important enough to hurt performance for.
I'm confused: if we are marking non-writable M_EXTPG mbufs as M_RDONLY, why can't we simply remove the M_EXTPG check from M_WRITABLE? Why do we need a new macro?
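To spell the question out with a simplified sketch (this is not the real sys/mbuf.h text, just the shape of the check I mean; the "..." stands in for the existing external storage refcount checks, which would be unchanged):

```c
/* Current shape: M_EXTPG mbufs are never considered writable. */
#define	M_WRITABLE_CUR(m)						\
	(((m)->m_flags & (M_RDONLY | M_EXTPG)) == 0 /* && ... */)

/* If non-writable M_EXTPG mbufs are now marked M_RDONLY, the M_EXTPG
 * test looks redundant and M_RDONLY alone should suffice. */
#define	M_WRITABLE_NEW(m)						\
	(((m)->m_flags & M_RDONLY) == 0 /* && ... */)
```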
Sep 5 2024
This passes basic sanity testing at Netflix. Sorry for the delayed approval; we had a few integration issues between this and a local Netflix feature that made it look like splice was not working. It only just now became obvious that the problem was due to our local feature, and how to fix it.
Jul 15 2024
Is this safe? I think so, but I confess that I don't know the low level details in this driver very well.
Jun 21 2024
I was concerned at first about isal, but then I remembered that @jhb had moved it from plugging in at the ktls layer to plugging in at the ocf layer.
May 1 2024
After this change, ktrace output is littered with 'CAP system call not allowed: $SYSCALL' on systems w/o capsicum enabled, which is confusing and distracting. Can this please be reverted to behave without CAP output for systems w/o capsicum?
Apr 29 2024
I consulted with @imp, and after a trip down the rabbit hole, we concluded that a header file consisting only of the definition of MAXPHYS is not creative (as this is the only way to express this in C) so it can't have copyright protection, and should simply be public domain.
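For the record, the kind of trivial header in question is essentially just the following; the guard name and value shown are placeholders, not the actual file:

```c
/*
 * Illustrative only: a header whose entire content is the MAXPHYS
 * definition.  Guard name and value are placeholders.
 */
#ifndef _MAXPHYS_H_
#define	_MAXPHYS_H_

#define	MAXPHYS		(128 * 1024)	/* max raw I/O transfer size */

#endif /* !_MAXPHYS_H_ */
```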
Apr 28 2024
- Update diff to avoid cutting/pasting MAXPHYS definition as per @kib's suggestion
Apr 18 2024
I just tripped over this again when trying to use some of the 16K changes I have in my Netflix tree on a personal machine running a GENERIC kernel, so let's try this again in a different way.
Apr 5 2024
Thank you for adding that option.
Apr 3 2024
Below are the results from my testing. I'm sorry that it took so long; I had to re-do testing from the start b/c the new machine was not exactly identical to the old one (different BIOS rev) and was giving slightly different results.
The results are from 92Gb/s of traffic over a one-hour period with 45-47K TCP connections established.
Mar 25 2024
OK, starting with an unpatched kernel & working my way through the patches. I'll report percent busy for unpatched and various patches on our original 100G server (based around the Xeon E5-2697A v4, which tends to be a poster child for cache misses, as it runs very close to the limits of its memory bandwidth). I'll be disabling powerd and using the RACK TCP stack's DGP pacing.
Mar 21 2024
Guys, this is crazy. Every SDT probe does a test on a global variable. If this lands, it will cause a noticeable performance impact, especially in high packet rate workloads. Can we shelve this until/unless SDT is modified to insert nops rather than do tests on a global variable? Or put this under its own options EXTRA_IP_PROBES or something?
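To illustrate the concern (simplified; these are not the real SDT macros, just the shape of the code at each probe site, and probe_fire/probe_enabled are stand-ins):

```c
/* Today's shape: every packet pays a global load plus a branch,
 * whether or not anything is attached to the probe. */
extern volatile int probe_enabled;	/* stand-in for the probe's id word */
#define	PROBE_TESTED(arg)						\
	do {								\
		if (__predict_false(probe_enabled))			\
			probe_fire(arg);				\
	} while (0)

/* A nop-patching scheme: the fast path carries only patchable nops
 * until the probe is enabled (hypothetical, amd64-flavored). */
#define	PROBE_PATCHED(arg)						\
	__asm __volatile("nop; nop; nop; nop; nop")
```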