This adds a vnlru_free() call in arc_prune_task() and restores balanced pruning as the default on systems with more than 4G of memory.
Diff Detail

- Repository: rS FreeBSD src repository (subversion)
- Lint: Skipped
- Unit Tests: Skipped
- Build Status: Buildable 24778

Event Timeline
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c

| Line | Comment |
|---|---|
| 560 | The description says 4GB. Where does the limit come from? The value should be wrapped in parens. |
| 4130 | `nr_scan` is a misleading name: vnlru_free() will free the requested number of vnodes, and may scan more than that. |
| 4133 | My reading of the ZoL code is that it tries to shrink all caches attached to the filesystem. vnlru_free() doesn't have the same effect; there are various UMA zones that you might want to try to exert pressure on as well, namei_zone for instance. Note that vnlru_proc() calls uma_reclaim() for this reason (though that is admittedly overkill). Maybe it's sufficient to just call vnlru_free(), but a comment should relate this to what happens on Linux. |
| 4380 | Why apply this limit? |
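To make the Linux comparison in the comment on line 4133 concrete, here is a rough, non-compilable sketch of what the prune task might look like if it both freed vnodes and applied UMA-wide pressure the way vnlru_proc() does. The function body, the `zfs_vfsops` argument, and the argument-passing convention are illustrative assumptions, not the actual patch; vnlru_free() and uma_reclaim() are the real kernel interfaces named in the review.

```c
/*
 * Illustrative sketch only -- not the actual patch.  On Linux, the ZoL
 * prune callback shrinks every cache attached to the filesystem; on
 * FreeBSD the closest analogues discussed here are vnlru_free(), which
 * frees up to the requested number of vnodes (and may scan more), and
 * uma_reclaim(), which drains all UMA zones (namei_zone included), as
 * vnlru_proc() does.
 */
static void
arc_prune_task(void *arg)
{
	/* Requested number of vnodes to free -- a count, not a scan limit. */
	int nr_to_free = (int)(uintptr_t)arg;

	/* Free up to nr_to_free vnodes; the walk may visit more than that. */
	vnlru_free(nr_to_free, &zfs_vfsops);

	/*
	 * Heavy-handed analogue of Linux shrinking every attached cache:
	 * drain cached items from all UMA zones.  vnlru_proc() does the
	 * same, though as noted above that is admittedly overkill here.
	 */
	uma_reclaim();
}
```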
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c

| Line | Comment |
|---|---|
| 560 | I misremembered, it's obviously 8G. The value is completely arbitrary. It's just that problems have been observed on low-memory systems, but that may just have been an artifact of a buggy port. |
| 4380 | The thinking is that for systems below some threshold we maintain the legacy behavior. If you think we can test it without, and can suggest all the zones to apply pressure to and how to apply a bit more fine-grained pressure, that would certainly be better. |
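The "legacy behavior below a threshold" idea could be gated roughly as below. This is a hypothetical sketch: the macro, the strategy variable, and the enum values are invented names for illustration, and the 8G cutoff is, per the discussion above, completely arbitrary.

```c
/*
 * Illustrative only: keep legacy (metadata-only) pruning on
 * small-memory systems, balanced pruning otherwise.  All names here
 * are placeholders; the 8G threshold is arbitrary per the review.
 */
#define	ARC_BALANCED_MIN_MEM	(8ULL << 30)	/* hypothetical cutoff */

if (ptob(physmem) < ARC_BALANCED_MIN_MEM)
	arc_meta_strategy = ARC_STRATEGY_META_ONLY;	/* legacy behavior */
else
	arc_meta_strategy = ARC_STRATEGY_BALANCED;	/* new default */
```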
sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c

| Line | Comment |
|---|---|
| 4380 | Sure, I understand what the code is doing, but I don't know what problems you observed or why this threshold is supposed to make sense, so I have nothing useful to offer. There is no list of such zones or any way to be more fine-grained, though I'm hoping to fix the latter soon. That's why vnlru_proc() just calls uma_reclaim(). |