Recent Activity
Today
In D46899#1070761, @olce wrote:
> In D46899#1070434, @jamie wrote:
> Currently, it seems to suggest (being a jailsys parameter) that there's some sort of valid "new" or "deleted" state for MAC inside the jail.
Not sure what you mean here. SYSCTL_JAIL_PARAM_NODE() just declares the common MAC sub-node. mac_do(4) will then indeed use the new SYSCTL_JAIL_PARAM_SYS_SUBNODE() for the mac.do jail parameter "node", effectively intended to be a jailsys one (see the sketch below).
It would make sense for the exec.clean parameter to apply to the config execution. Bit of a chicken and egg problem there, but there's still the "-l" flag.
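To make the relationship concrete, here is a minimal sketch of the two declarations mentioned above. SYSCTL_JAIL_PARAM_NODE() is the existing macro from <sys/sysctl.h>; SYSCTL_JAIL_PARAM_SYS_SUBNODE() is the new macro under discussion, so its exact name and argument list as written here are an assumption based on this thread, not the committed interface.

```c
/*
 * Minimal sketch, not the mac_do(4) code under review: the shared "mac"
 * jail-parameter node plus a jailsys-style "mac.do" sub-node beneath it.
 * The SYS_SUBNODE macro's signature here is assumed from the discussion.
 */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>
#include <sys/jail.h>

/* Common "mac" node shared by all MAC policies' jail parameters. */
SYSCTL_JAIL_PARAM_NODE(mac, "Jail parameters for MAC policies");

/* mac_do(4)'s own "mac.do" parameter, exposed as a jailsys node. */
SYSCTL_JAIL_PARAM_SYS_SUBNODE(mac, do, CTLFLAG_RW,
    "Jail mac_do parameters");
```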
I'm torn on this, but I'm going to punt to declutter my life
There are problems with this approach, since it relies on ties.
I still like this idea, but I think that I'm just going to punt on this detail
Yeah, my ideal irq rate/queue is < 1000. We mostly use Chelsio and Mellanox NICs that can do super aggressive irq coalescing without freaking out TCP, thanks to using RX timestamps. Super aggressive coalescing like this lets us build packet trains in excess of 1000 packets to feed to LRO via RSS-assisted LRO, and we actually have useful LRO on internet workloads with tens of thousands of TCP connections per queue. That reminds me that I should port RSS-assisted LRO to iflib (e.g., lro_queue_mbuf()).
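For readers unfamiliar with the pattern, a rough sketch of what RSS-assisted LRO looks like from a driver's RX path follows; this is illustrative code, not the Netflix or iflib implementation, and it assumes the lro_ctrl was set up with tcp_lro_init_args() and a non-zero mbuf queue.

```c
/*
 * Rough sketch of RSS-assisted LRO from a driver RX path (illustrative only).
 * Instead of aggregating packet-by-packet, the driver queues mbufs and lets
 * tcp_lro_queue_mbuf() sort them by RSS hash before one flush at the end of
 * the interrupt pass, which is what builds the long per-connection trains.
 */
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/socket.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <netinet/tcp_lro.h>

static void
rxq_deliver(struct lro_ctrl *lro, struct mbuf *m)
{
	/* Defer aggregation: mbufs are collected and hash-sorted later. */
	if (tcp_lro_queue_mbuf(lro, m) != 0)
		m_freem(m);	/* a real driver would hand it to the stack */
}

static void
rxq_end_of_pass(struct lro_ctrl *lro)
{
	/* Sort the queued mbufs, aggregate, and push everything up. */
	tcp_lro_flush_all(lro);
}
```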
A different variation of this landed as part of 3208a189c1e2c4ef35daa432fe45629a043d7047
This was folded in with D35917 and landed as ad759c73522ef
In D30155#1074682, @gallatin wrote:
> In D30155#1074005, @kbowling wrote:
>> In D30155#1073987, @gallatin wrote:
>>> In D30155#1073639, @kbowling wrote:
>>> @imp @gallatin if you are able to test your workload, setting this to 1 and 2 would be new behavior versus where you are currently:
>> I can pull this into our tree and make an image for @dhw to run on the A/B cluster. However, we're not using this hardware very much any more, and there is only 1 pair of machines using it in the A/B cluster. Lmk if you're still interested, and I'll try to build the image tomorrow so that David can test it at his leisure.
> Sure, it sounds like that is only enough for one experiment, so I would focus on the default algorithm the patch will boot with: sysctl dev.ix.<N>.enable_aim=1

It's running now. Eyeballing command-line utilities, CPU use is about 5 percentage points higher (27% -> 32%) and we have 2x the irq rate (110k vs 55k irq/sec).
When applying this, I wanted to give it a fair shake, so I disabled the tunable we normally set: hw.ix.max_interrupt_rate=4000. Perhaps that was a mistake? Is there a runtime way to tweak the algorithm so it doesn't interrupt so fast under this level of load?
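As an aside, the generic way to poke such a knob at runtime from userland is sysctlbyname(3), equivalent to running sysctl dev.ix.0.enable_aim=0. Whether the patch exposes enable_aim (or a moderation ceiling) as a writable runtime sysctl is exactly the open question above; the OID name and unit number in this sketch are assumptions for illustration only.

```c
/*
 * Hedged userland sketch: toggle an assumed runtime sysctl via
 * sysctlbyname(3).  The OID "dev.ix.0.enable_aim" is taken from the thread
 * and may not be writable at runtime in the actual patch.
 */
#include <sys/types.h>
#include <sys/sysctl.h>
#include <err.h>
#include <stdio.h>

int
main(void)
{
	const char *oid = "dev.ix.0.enable_aim";	/* assumed OID */
	int newval = 0;					/* 0 = disable AIM */
	int oldval;
	size_t oldlen = sizeof(oldval);

	/* Read the current value and write the new one in a single call. */
	if (sysctlbyname(oid, &oldval, &oldlen, &newval, sizeof(newval)) == -1)
		err(1, "sysctlbyname(%s)", oid);
	printf("%s: %d -> %d\n", oid, oldval, newval);
	return (0);
}
```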
In D30155#1074005, @kbowling wrote:
> In D30155#1073987, @gallatin wrote:
>> In D30155#1073639, @kbowling wrote:
>> @imp @gallatin if you are able to test your workload, setting this to 1 and 2 would be new behavior versus where you are currently:
> I can pull this into our tree and make an image for @dhw to run on the A/B cluster. However, we're not using this hardware very much any more, and there is only 1 pair of machines using it in the A/B cluster. Lmk if you're still interested, and I'll try to build the image tomorrow so that David can test it at his leisure.

Sure, it sounds like that is only enough for one experiment, so I would focus on the default algorithm the patch will boot with: sysctl dev.ix.<N>.enable_aim=1
rebasing with minor additional changes before I begin the work to split this up
85f2095d4d7e2e981d5535448d9b874307a2dae5 landed this.
Sorry I didn't spot this and comment until after the commit. Shouldn't this go to stderr? Perhaps warnx?
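For what it's worth, the change being suggested is small; here is a minimal sketch of the warnx(3) form, with a placeholder message rather than the actual diagnostic from the commit:

```c
/*
 * Minimal sketch of the suggestion: send the diagnostic to stderr via
 * warnx(3) instead of printing it to stdout.  The message and condition
 * are placeholders, not the code from the commit under discussion.
 */
#include <err.h>
#include <stdbool.h>

static void
report_skip(const char *path, bool verbose)
{
	if (verbose)
		/* warnx() prefixes the program name and writes to stderr. */
		warnx("skipping %s: not a regular file", path);
}
```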
Yesterday
OBE. Even when I had this in, it only helped a little and failure is more nuanced than this patch contemplates.
This turns out not to be needed for the thing the vendor was telling me to do. Their firmware didn't behave like they thought it would, and the solution is along a different path. So, rather than add a useless feature, I'm punting.
OBE long ago. While this still applies, it's kinda useless. The bugs it was chasing have been retired and what's left is in no shape to commit.