Jul 21 2018
In D15985#339583, @gallatin wrote:
> I ran this on a Netflix 100g box. I observed no measurable difference in CPU time. So I think this patch is "neutral" from the perspective of our (mostly kernel) workload.
Jul 20 2018
Also, a thread whose average runtime between wakeups is much less than the batch time slice needs to be able to preempt a totally CPU-bound thread.
Another potential advantage of the scheme that I suggested is that I think it should reduce thrashing in certain circumstances. For instance, if a bunch of cksum-like threads are running in parallel, only a limited number of them will be pulled out of the timeshare queue. Once enough threads have been pulled off that queue to fully occupy the CPUs, no more will be started until some of the first batch reach the end of their time slices. The current implementation rapidly churns through the contents of the timeshare queue, since each thread going to sleep triggers the next thread in the queue to be started.
An interesting approach might be to put non-interactive threads on the real-time queue if they are preempted, or if they sleep and wake up, before they have consumed their entire time slice. This shouldn't change anything for a totally CPU-bound thread unless it gets preempted. For the cksum example, the thread would temporarily be treated more like a low-priority interactive thread until it manages to use up its time slice, at which point it is put back on the timeshare queue and has to wait its turn to run again. In a situation like this, more than one thread could be removed from the timeshare queue at a time. A minimal sketch of this requeue policy follows.
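To make the idea concrete, here is a minimal sketch of the requeue policy I have in mind. The types and names (struct tstate, runq_add(), slice_size()) are illustrative stand-ins, not the actual sched_ule.c interfaces:

#include <stdbool.h>

/* Illustrative stand-ins for the scheduler's per-thread state and
 * run queues; these are not the real sched_ule.c types. */
struct runq;                    /* opaque queue of runnable threads */
struct tstate {
	int	slice_left;     /* ticks remaining in the current slice */
	bool	interactive;    /* passed the interactivity scoring test */
};

extern struct runq realtime_q, timeshare_q;
void runq_add(struct runq *, struct tstate *);  /* enqueue at tail */
int  slice_size(struct tstate *);               /* full slice, in ticks */

/*
 * Proposed policy: a non-interactive thread that is preempted, or that
 * sleeps and wakes, before exhausting its slice goes back on the
 * real-time queue; only a thread that has burned through its whole
 * slice returns to the tail of the timeshare queue.
 */
void
requeue(struct tstate *ts)
{
	if (ts->interactive || ts->slice_left > 0) {
		/*
		 * Runs again soon from the real-time queue instead of
		 * waiting a full trip through the timeshare queue.
		 */
		runq_add(&realtime_q, ts);
	} else {
		/* Slice exhausted: recharge it and wait its turn. */
		ts->slice_left = slice_size(ts);
		runq_add(&timeshare_q, ts);
	}
}

A totally CPU-bound thread always exhausts its slice, so its behavior is unchanged; a cksum-like thread keeps landing on the real-time queue until it finally uses up a full slice.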
Jul 18 2018
In D16301#345971, @markj wrote:
> In D16301#345968, @alc wrote:
>> How should I proceed?
>
> How about asking @truckman to try a poudriere run on Ryzen with this change? He had reported that some port build failures stopped occurring with r329254.
Apr 7 2018
This patch works as well as the manual sysctl tuning experiment that I previously tried.
Feb 15 2018
I don't remember seeing any finalizer-related crashes. The failures were mostly malloc-related and looked like they could be caused by arena corruption. I can try to dig up the logs later. I don't recall seeing any build failures on my FX CPU, but lang/go would almost always fail to build on Ryzen.
Feb 13 2018
As noted in my email comment, this patch appears to have resolved a number of random-looking port build failures on my Ryzen machine, in particular lang/go and anything related to lang/guile.
Oct 26 2017
The changes look good.
Oct 25 2017
There is a similar problem when doing tunneling. For instance, to get IPv6 connectivity to the outside world I have to use a gif tunnel that encapsulates IPv6 packets inside IPv4 packets sent to a remote 6rd gateway. It would be nice to have the option of peering into these encapsulated packets to see the individual flows. Care would have to be taken to make sure that we don't walk off the end of the mbuf when doing this.
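Any such peeking would need careful bounds checking, along the lines of the sketch below. m_copydata() and m_pkthdr.len are the stock mbuf interfaces; the function itself is hypothetical and only illustrates validating the chain length before each read of the inner header:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <netinet/in.h>
#include <netinet/ip.h>
#include <netinet/ip6.h>

/*
 * Extract the inner flow addresses from an IPv4 protocol-41 (6in4/6rd)
 * packet.  Every read is preceded by a check against m_pkthdr.len so
 * we never walk off the end of the mbuf chain.  Returns 0 on success.
 */
static int
inner_v6_flow(const struct mbuf *m, struct in6_addr *src,
    struct in6_addr *dst)
{
	struct ip iph;
	struct ip6_hdr ip6h;
	int hlen;

	if (m->m_pkthdr.len < (int)sizeof(iph))
		return (-1);
	m_copydata(m, 0, sizeof(iph), (caddr_t)&iph);
	if (iph.ip_p != IPPROTO_IPV6)
		return (-1);
	hlen = iph.ip_hl << 2;
	/* The inner IPv6 header must be entirely present. */
	if (hlen < (int)sizeof(iph) ||
	    m->m_pkthdr.len < hlen + (int)sizeof(ip6h))
		return (-1);
	m_copydata(m, hlen, sizeof(ip6h), (caddr_t)&ip6h);
	*src = ip6h.ip6_src;
	*dst = ip6h.ip6_dst;
	return (0);
}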
Sep 21 2017
Ping
Sep 15 2017
Incorporate trysteal improvement from Jeff Roberson @ Isilon
Sep 8 2017
In D12130#254431, @mav wrote:
> At high context-switch rates on systems with many cores, the stealing code does create significant CPU load. So I can believe that the critical section can indeed be an issue, and this way may be good to go. I personally thought about limiting the maximal stealing distance based on statistical factors such as the context-switch rate, since we probably shouldn't touch every CPU's caches on every context switch -- that is a dead end as the number of CPUs grows -- but I haven't gotten far with it.
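Something like the following could express that idea. This is purely illustrative: steal_limit(), its constants, and the smoothed switch-rate input are hypothetical, not existing sched_ule.c code:

/*
 * Shrink the steal search radius as the system-wide context-switch
 * rate grows, so a hot machine stops sweeping every other CPU's
 * cache on every switch.
 */
#define	STEAL_LIMIT_MAX	64	/* search the whole machine when quiet */
#define	STEAL_LIMIT_MIN	2	/* nearest neighbors only when hot */

static int
steal_limit(unsigned long switch_rate)	/* smoothed switches/sec */
{
	int limit = STEAL_LIMIT_MAX;

	/* Halve the radius for each order of magnitude of load. */
	while (switch_rate >= 10000 && limit > STEAL_LIMIT_MIN) {
		limit /= 2;
		switch_rate /= 10;
	}
	return (limit);
}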
Sep 7 2017
Builds fine and doesn't appear to break anything on my Ryzen machine.