Update zfs_arc_free_target after r329882.
ClosedPublic

Authored by markj on Apr 6 2018, 8:38 PM.

Details

Summary

With r329882, the page daemon will reclaim pages whenever a domain's free
page count drops below its free target. In particular, it is now unlikely
for v_free_count to drop below zfs_arc_free_target under ordinary
circumstances. Update zfs_arc_free_target accordingly.
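
For context, a minimal sketch of the kind of adjustment described above, assuming the ARC seeds zfs_arc_free_target from a SYSINIT hook named arc_free_target_init and that the global free target is readable as vm_cnt.v_free_target; both names are assumptions, not a quote of the committed diff.

    #include <sys/param.h>
    #include <sys/kernel.h>
    #include <sys/vmmeter.h>

    extern u_int zfs_arc_free_target;	/* existing ARC tunable */

    /*
     * Sketch only: after r329882 the page daemon reclaims whenever a
     * domain's free page count falls below its free target, so the ARC's
     * own threshold should track the free target rather than the old
     * pageout wakeup threshold.
     */
    static void
    arc_free_target_init(void *unused __unused)
    {
            zfs_arc_free_target = vm_cnt.v_free_target;
    }
    SYSINIT(arc_free_target_init, SI_SUB_KTHREAD_PAGE, SI_ORDER_ANY,
        arc_free_target_init, NULL);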

Test Plan

Don tested this change and found that it addressed the ARC
backpressure issue which appeared after r329882. I don't
think this is a complete solution, but it's an improvement over
the current behaviour.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint
Not Applicable
Unit
Tests Not Applicable

Event Timeline

markj added reviewers: avg, mav, jeff, delphij.
markj edited subscribers, added: truckman; removed: delphij.

This patch works as well as the manual sysctl tuning experiment that I previously tried.

sys/cddl/contrib/opensolaris/uts/common/fs/zfs/arc.c
392 (On Diff #41207)

There is an inherent problem when attempting to balance multiple competing caches: which one do we reclaim from? Ideally you'd want to keep the pages that are most likely to be re-used, regardless of where they live. Unfortunately that is impossible to determine. We have this same problem with UMA. Does it make more sense to page out clean file pages or to throw away cached kernel memory? One costs I/O, the other potentially a large amount of CPU due to lock contention.

What would a better solution be? From research papers I see little value in the ARC over our own page cache. It does loop detection and some other minor cache improvements, but in the age of SSDs these are increasingly irrelevant for high-performance systems. I haven't looked at it feature-wise, but I think placing ARC data in the inactive queue to be recycled like the buf cache would be preferable. You could do it like the VMIO integration in the buf cache, where you statically keep a small subset pinned and let the rest float. Since the ARC is physically organized, you could leave evicted blocks on the devvps or construct some pseudo-device VM objects for the purpose.

In the absence of some better solution, we have two back-pressure mechanisms that need to coordinate: the inactive queue and the ARC. With the PID controller, the free page count is not likely to dip below v_free_target often unless consumption ramps very quickly or the inactive queue is exhausted. We probably want to keep some balance between inactive-queue processing and ARC eviction as sources of free pages.
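
To make that coordination concrete, a simplified, hypothetical helper follows; the function name and the raw free-page argument are illustrative, not the actual arc.c logic.

    #include <sys/types.h>

    extern u_int zfs_arc_free_target;	/* ARC's free-page threshold */

    /*
     * Two mechanisms watch the free page count: the page daemon's PID
     * controller reclaims from the inactive queue when free pages drop
     * below v_free_target, and the ARC shrinks when they drop below
     * zfs_arc_free_target.  If the ARC threshold sits far below the page
     * daemon's target, the inactive queue absorbs nearly all of the
     * pressure.  This hypothetical helper reports the ARC-side deficit.
     */
    static int64_t
    arc_free_page_deficit(uint64_t free_pages)
    {
            if (free_pages >= zfs_arc_free_target)
                    return (0);
            return ((int64_t)zfs_arc_free_target - (int64_t)free_pages);
    }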

Possibly this would be better served by an additional event handler. Rather than lowmem, which only fires when we're already low, this would be a hook on every pageout. That would at least allow us to synchronize the two processes and let each give up a fraction.
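
A rough sketch of such a hook using the kernel's EVENTHANDLER(9) machinery; the pageout_scan event name, its arguments, and the callback are all hypothetical, unlike the existing vm_lowmem event.

    #include <sys/param.h>
    #include <sys/eventhandler.h>

    /* Hypothetical event fired on every pageout scan, not only when low. */
    typedef void (*pageout_scan_fn)(void *arg, int domain, u_int shortage);
    EVENTHANDLER_DECLARE(pageout_scan, pageout_scan_fn);

    static eventhandler_tag arc_pageout_scan_tag;

    /*
     * Hypothetical ARC callback: give back a fraction of the shortage so
     * the inactive queue and the ARC each do part of the reclamation work.
     */
    static void
    arc_pageout_scan_cb(void *arg __unused, int domain __unused,
        u_int shortage __unused)
    {
            /* Shrink the ARC by some fraction of ptob(shortage) here. */
    }

    static void
    arc_pageout_hook_init(void)
    {
            arc_pageout_scan_tag = EVENTHANDLER_REGISTER(pageout_scan,
                arc_pageout_scan_cb, NULL, EVENTHANDLER_PRI_FIRST);
    }

    /*
     * The page daemon would then invoke the hook once per scan:
     *	EVENTHANDLER_INVOKE(pageout_scan, domain, shortage);
     */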

In the absence of that, and recognizing that we need a temporary fix, I would set arc_free_target a percent or so above free_target, because in reality the PID controller will raise the free target dynamically according to consumption.
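
For illustration only, with the caveat that the one-percent margin and the helper name are made up rather than taken from the review:

    #include <sys/types.h>

    /*
     * Hypothetical helper: place the ARC's free target a percent or so
     * above the page daemon's free target so both mechanisms see pressure
     * at about the same time.
     */
    static u_int
    arc_free_target_with_margin(u_int free_target)
    {
            return (free_target + free_target / 100);
    }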

This revision was not accepted when it landed; it landed in state Needs Review. Apr 10 2018, 1:56 PM
This revision was automatically updated to reflect the committed changes.