Update to latest syslogd changes
Oct 2 2023
Aug 17 2023
In D41363#945202, @jfree wrote:

In D41363#944911, @slw_zxy.spb.ru wrote:

I use syslogd to collect messages from many sources.
I have a moderate message flow (about 1..10 messages/second on average).
I applied D41357, D41358, D41359, D41360, and D41362 -- no problems.
I applied D41363 -- after about 1..5 minutes syslogd stops receiving and processing remote messages. Local messages are still processed.
Most remote messages are not received and are lost -- no activity in kevent().
After D41363, syslogd is broken.

I was able to reproduce this issue, and it seems Mark was right: removing EV_CLEAR from the kevent() flags fixes the message stalling.
Oddly enough, this bug doesn't occur when the messages are sent through syslogd's UDP socket locally. It only happens when the messages come from another host.
@slw_zxy.spb.ru, would you mind seeing if this new patch works for you?
Aug 16 2023
I use syslogd to collect messages from many sources.
I have a moderate message flow (about 1..10 messages/second on average).
I applied D41357, D41358, D41359, D41360, and D41362 -- no problems.
I applied D41363 -- after about 1..5 minutes syslogd stops receiving and processing remote messages. Local messages are still processed.
Most remote messages are not received and are lost -- no activity in kevent().
After D41363, syslogd is broken.
Aug 14 2023
In D41448#943957, @markj wrote:
cnt is size_t, an unsigned type, so this check is always false. It should be ssize_t.
Auto-tune based on hw.usermem and use __XSTRING
Jul 6 2022
Jul 1 2022
Oct 27 2020
Is vi ready to preserve files that do not conform to UTF?
Jun 6 2019
In D19094#443654, @avg wrote:

Sorry, I myself went missing for a long while.
Yes, I can commit this change.

Do you want anything specific to appear in the commit message? Any additional attributions, etc.?
Mar 11 2019
In D19094#418092, @avg wrote:

In D19094#417188, @slw_zxy.spb.ru wrote:

No more replies?
Unfortunately, no.
I think that we can commit your proposed change. If George comes up with a different and better solution later on, there should be no problem switching to it.
Mar 7 2019
In D19094#412736, @avg wrote:

In D19094#411932, @slw_zxy.spb.ru wrote:

Did you successfully contact George?
I've just got a reply from George.
He agrees with your analysis, but needs some more time to think about how to address the issue.
Let's wait a bit more.
Thanks!
Feb 19 2019
In D19094#410508, @avg wrote:

Let me try to contact George again.
Feb 13 2019
In D19094#410239, @mav wrote:

In D19094#410192, @slw_zxy.spb.ru wrote:

I'm not sure about calling remove_reference() from arc_hdr_alloc_pabd() (or from parallel tasks), but look at the ARC MFU/MRU size calculation in arc_change_state() called from arc_access(), and at the !GHOST_STATE(state) case in arc_get_data_impl() called from arc_hdr_alloc_pabd().
I mean that interchanging these lines can cause problems for this accounting.

I am not sure what accounting problem you are talking about,

In D19094#410062, @mav wrote:

While I see the problem you are fixing, the fix looks ugly to me, which is why I would look for something nicer.
Feb 6 2019
Aug 29 2018
- The ARC doesn't grow, memory pressure does not arise, and the page daemon is not activated.
The ARC is not growing after 8, but the ARC hit rate is too low. Why is it not growing? Is it because the free_memory < (arc_c >> arc_no_grow_shift) condition is true, or is there some other reason?
Aug 28 2018
In D7538#361135, @markj wrote:

In D7538#359546, @slw_zxy.spb.ru wrote:

To be clear, I'm just stating that r332365 changed zfs_arc_free_target to be equal to vm_cnt.v_free_target. It looks to me that this is equivalent to the change you made to arc_available_memory(EXCLUDE_ZONE_CACHE), where v_free_target is referenced directly.

No.
arc_available_memory(EXCLUDE_ZONE_CACHE) checks the conditions for memory pressure, i.e. how much free memory the OS sees (the kmem cache is not counted toward this).

Yes, which is exactly what the computation freemem - zfs_arc_free_target is. If you expand these definitions, it is vm_cnt.v_free_count - vm_cnt.v_free_target, where v_free_count does not include UMA caches. When v_free_count < v_free_target, the system is under memory pressure, and the page daemon attempts to free pages until v_free_count >= v_free_target. In -CURRENT, you can think of needfree as being the same as v_free_target - v_free_count when this difference is positive. In stable branches this is not quite true.
Aug 23 2018
To be clear, I'm just stating that r332365 changed zfs_arc_free_target to be equal to vm_cnt.v_free_target. It looks to me that this is equivalent to the change you made to arc_available_memory(EXCLUDE_ZONE_CACHE), where v_free_target is referenced directly.
In D7538#358922, @markj wrote:

Sorry that this review has stalled lately. I would like to compare this patch to what's in -CURRENT, which has evolved a fair bit since the patch was updated. Once that picture is clearer, we can focus on stable/11.
Jun 18 2018
I have come to realise that there is another issue related to this: since the default arc_max is wired RAM that is not counted in max_wired, a default setup is allowed to wire more than the physical RAM installed.
See my comment here for more explanation.
May 22 2018
Unified for -STABLE and -CURRENT now
May 11 2018
Update to latest -STABLE changes
Sep 28 2017
In D7538#259903, @karl_denninger.net wrote:

As some of you probably know, I've been chasing this same general issue here: https://bugs.freebsd.org/bugzilla/show_bug.cgi?id=187594
I'm playing with this patch set now on 11.1-STABLE (r324056), and other than leaving a crazy amount of inactive pages outstanding (which never get reclaimed in many instances, thus pressuring the ARC size down to half or so of what it could otherwise be), it appears to behave well.
I think I've got a fix for that last issue, and this patch set is a more elegant approach to the UMA bloat problem than I had come up with. I want to run my changes to this code for a few days before contributing my thoughts in the form of code, but the short version is that adding a pager wakeup somewhat above the low-memory threshold appears to resolve the "frozen" inactive page issue. If that proves out, this patch set looks very good and somewhat superior to the one I have been running for a while (and thus a better option).
Mar 18 2017
- Restore lost code
- Control (via sysctl) the draining of the per-CPU UMA caches
- Skip the first UMA cache drain when the caches already hold enough memory.
- Don't wait for drained memory to be accounted as free. Stop draining once enough is expected.
Mar 5 2017
Fix userland compilation
Mar 4 2017
Generalize the zone traversal algorithm and optimize cache reclaim (arc_kmem_reap_now() is costly and is called only as a last resort)
Mar 2 2017
Optimize zone processing
The per-CPU caches must be drained before the zone drain
Mar 1 2017
fix typo again
fix typo
- Check the size of free items in zones (this eliminates false pressure on the ARC from the per-CPU zone caches)
- Clean up the per-CPU zone caches after arc_kmem_reap_now() (memory shrunk from the ARC cache becomes immediately available as free memory)
Feb 14 2017
In D7538#198159, @avg wrote:

In D7538#198150, @slw_zxy.spb.ru wrote:
- there needs to be a very good explanation of why we would want to calculate needfree in the proposed fashion
The main goal: don't allow freemem to drop below v_free_min.
I think that the existing check handles that already.
- there needs to be a very good explanation of why we would want to calculate needfree in the proposed fashion
In D7538#198085, @avg wrote:
- I don't think that the needfree calculations in the patch are correct
- in illumos needfree means something entirely different from what's calculated in the patch
- there needs to be a very good explanation of why we would want to calculate needfree in the proposed fashion
Feb 13 2017
Oct 17 2016
I have been testing this patch for 3 days.
No TCP-related problems.
Oct 10 2016
I think imp has also reproduced this issue (https://lists.freebsd.org/pipermail/freebsd-stable/2016-September/085518.html).
Urmas Lett <urmas.lett@eenet.ee> also reported it (in an email to jch).
Aug 17 2016
Illumos updates needfree via the kmem subsystem. On FreeBSD, emulate this in the following way: