Jul 21 2018
In D15985#339583, @gallatin wrote:
I ran this on a Netflix 100g box. I observed no measurable difference in CPU time, so I think this patch is "neutral" from the perspective of our (mostly kernel) workload.
Jul 20 2018
Also, a thread whose average runtime between wakeups is much less than the batch time slice needs to be able to interrupt a totally CPU-bound thread.
Another potential advantage of the scheme I suggested is that it should reduce thrashing in certain circumstances. For instance, if a bunch of cksum-like threads are running in parallel, only a limited number of them will be pulled off the timeshare queue. Once enough threads have been pulled off that queue to fully occupy the CPUs, no more will be started until some of the first batch reach the end of their time slices. The current implementation instead rapidly churns through the contents of the timeshare queue, since each thread that goes to sleep triggers the next thread in the queue to be started.
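A minimal sketch of that slot-based behavior, in plain C; the names here (struct tdq fields, may_pull_timeshare, slice_expired) are invented for illustration and do not correspond to the actual sched_ule.c code:

struct tdq {
	int running_timeshare;	/* threads pulled from the timeshare queue */
	int ncpus;		/* CPUs served by this run queue */
};

/* Pull the next timeshare thread only while a CPU would otherwise idle. */
static int
may_pull_timeshare(const struct tdq *q)
{
	return (q->running_timeshare < q->ncpus);
}

/*
 * Under the suggested scheme, a short sleep does not release the
 * thread's slot, so the next queued thread is not started; the sleeper
 * is expected to wake up and finish out its slice.
 */
static void
thread_slept(struct tdq *q)
{
	(void)q;
}

/* Only slice expiry frees a slot for the next thread in the queue. */
static void
slice_expired(struct tdq *q)
{
	q->running_timeshare--;
}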
An interesting approach might be to put non-interactive threads on the realtime queue if they are preempted or sleep and wake up before they have consumed their entire time slice. This shouldn't change anything for a totally CPU-bound thread unless it gets preempted. In the cksum example, the thread would temporarily be treated more like a low-priority interactive thread until it manages to use up its time slice, at which point it gets put back on the timeshare queue, where it has to sit and wait for its turn to run again. In a situation like this, more than one thread could be removed from the timeshare queue at a time.
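A sketch of that queue-selection heuristic, with the same caveat: the names (select_runq, slice_left, the runq enum) are hypothetical and not the real sched_ule.c interface:

enum runq { RUNQ_REALTIME, RUNQ_TIMESHARE };

struct thread_state {
	int slice_left;		/* ticks left in the current time slice */
	int interactive;	/* nonzero if classified as interactive */
};

static enum runq
select_runq(const struct thread_state *ts, int preempted)
{
	/* Interactive threads go to the realtime queue, as they do today. */
	if (ts->interactive)
		return (RUNQ_REALTIME);
	/*
	 * Proposed tweak: a non-interactive thread that was preempted or
	 * slept before exhausting its slice is temporarily treated like a
	 * low-priority interactive thread.  Once it uses up its slice it
	 * returns to the timeshare queue and waits its turn.
	 */
	if (preempted || ts->slice_left > 0)
		return (RUNQ_REALTIME);
	return (RUNQ_TIMESHARE);
}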
Jul 18 2018
In D16301#345971, @markj wrote:
In D16301#345968, @alc wrote:
How should I proceed?
How about asking @truckman to try a poudriere run on Ryzen with this change? He had reported that some port build failures stopped occurring with r329254.
Apr 7 2018
This patch works as well as the manual sysctl tuning experiment that I previously tried.
Feb 15 2018
I don't remember seeing any finalizer-related crashes. The failures were mostly malloc-related and looked like they could be caused by arena corruption. I can try to dig up the logs later. I don't recall seeing any build failures on my FX CPU, but lang/go would almost always fail to build on Ryzen.
Feb 13 2018
As noted in my email comment, this patch appears to have resolved a number of randomish-looking ports build failures on my Ryzen machine, in particular lang/go and anything related to lang/guile.
The changes in this update are: