
vm_meter: Fix laundry accounting
ClosedPublic

Authored by markj on Oct 21 2024, 1:38 PM.
Details

Summary

Pages in PQ_UNSWAPPABLE should be considered part of the laundry.
Otherwise, on systems with no swap, the total amount of memory visible
to tools like top(1) decreases.

It doesn't seem very useful to have a dedicated counter for unswappable
pages, and updating applications accordingly would be painful, so just
lump them in with laundry for now.
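Conceptually (a sketch only, not the actual diff; the struct and function names below are invented for illustration), the accounting change amounts to folding the unswappable queue into the reported laundry count:

```c
#include <assert.h>

/*
 * Illustrative sketch: on a swapless system, pages moved to
 * PQ_UNSWAPPABLE previously disappeared from the laundry count,
 * shrinking the total memory visible to tools like top(1).  Counting
 * the unswappable queue as laundry keeps the total constant.
 */
struct pagequeue {
	unsigned long pq_cnt;	/* number of pages in the queue */
};

static unsigned long
laundry_count(const struct pagequeue *laundry,
    const struct pagequeue *unswappable)
{
	/* Unswappable pages are still "queued for laundering". */
	return (laundry->pq_cnt + unswappable->pq_cnt);
}
```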

Diff Detail

Repository
rG FreeBSD src repository

Event Timeline

markj requested review of this revision. Oct 21 2024, 1:38 PM

LGTM, it would be good to briefly note this change in the 'Physical Memory Stats' section of the top(1) manpage as well.

This revision is now accepted and ready to land. Oct 21 2024, 3:58 PM
kib added inline comments.
sys/vm/vm_meter.c
519–521

BTW, this changed the ABI; does a 32-bit top(1) still work on a 64-bit host?

sys/vm/vm_meter.c
519–521

No, this is a bug: I missed that the old sysctl returned an int, not a uint64_t.

We eventually need to widen these sysctls to at least u_long, but it should be possible to provide backward compatibility by looking at the size of the user-supplied buffer.
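A hypothetical sketch of that compat idea in plain C (this is not the kernel's sysctl API; `copyout_counter` and its signature are invented here): inspect the caller's buffer size and hand a truncated 32-bit value to old binaries that still read the counter as an int.

```c
#include <stdint.h>
#include <string.h>

/*
 * Sketch: write a 64-bit counter into a user-supplied buffer,
 * truncating to 32 bits when the caller passed a 4-byte buffer, as a
 * legacy int-sized sysctl consumer would.  Returns 0 on success.
 */
static int
copyout_counter(uint64_t value, void *buf, size_t buflen)
{
	if (buflen >= sizeof(uint64_t)) {
		/* New consumers get the full 64-bit counter. */
		memcpy(buf, &value, sizeof(uint64_t));
		return (0);
	}
	if (buflen == sizeof(uint32_t)) {
		/* Legacy consumers: truncate, as the old int sysctl did. */
		uint32_t v32 = (uint32_t)value;

		memcpy(buf, &v32, sizeof(v32));
		return (0);
	}
	return (-1);		/* unsupported buffer size */
}
```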

> LGTM, it would be good to briefly note this change in the 'Physical Memory Stats' section of the top(1) manpage as well.

IMO the description there is already correct. Unswappable pages are still "queued for laundering", the fact that they live in a separate queue is just an implementation detail.

This revision now requires review to proceed. Oct 21 2024, 8:47 PM
This revision is now accepted and ready to land. Oct 21 2024, 9:23 PM
This revision was automatically updated to reflect the committed changes.

I noticed this problem roughly two years ago but never took the time to investigate or report it. Thanks for the fix.

I fear that leaving unswappable pages in the "Laundry" count could confuse users. To me, "Laundry" has the connotation that you want to clean up pages (by writing them to persistent media), so having a large amount of memory staying in "Laundry" forever (until it is completely freed) is disturbing.

I guess that @bnovkov's suggestion to amend top(1)'s manual page stems from a similar concern.

So, going forward, I tend to think that having a separate "Unswappable" category would be an improvement.

> I fear that leaving unswappable pages in the "Laundry" count could confuse users. To me, "Laundry" has the connotation that you want to clean up pages (by writing them to persistent media), so having a large amount of memory staying in "Laundry" forever (until it is completely freed) is disturbing.

That it's disturbing is not a bad thing. It means that the system might benefit from having a swap device.

> I guess that @bnovkov's suggestion to amend top(1)'s manual page stems from a similar concern.

> So, going forward, I tend to think that having a separate "Unswappable" category would be an improvement.

There are many utilities that want to gather info about memory usage from sysctls, so adding a new category is quite intrusive. The benefit should be worth the cost, but I don't yet see that it is.

> That it's disturbing is not a bad thing.

Well, something disturbing takes energy, so it should be worth the trouble.

> It means that the system might benefit from having a swap device.

I agree. But then, maybe "Laundry" is not the right name; "Dirty" seems better in this context (no action suggested now).

> There are many utilities that want to gather info about memory usage from sysctls, so adding a new category is quite intrusive. The benefit should be worth the cost, but I don't yet see that it is.

In this precise case, the benefit is avoiding questions and bug reports about "Laundry" never shrinking even under memory pressure. I asked myself this very question some time ago, and I'll bet I won't be the only one.

We should probably take advantage of the next category addition or change to revise the sysctls so that applications can actually enumerate all memory categories covering all allocated memory. For example, grouping all these categories under a single sysctl node would largely insulate applications from changes in categories. Perhaps we could also report per-category flags where relevant (e.g., immediately freeable). In other words, I'm suggesting that we ponder, at some point, a better interface that lowers the cost of such changes.
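One possible shape for such a self-describing interface (purely illustrative; the record layout, names, and flags below are invented, not an existing FreeBSD interface):

```c
#include <stdint.h>

/*
 * Illustrative: a self-describing record that a single sysctl node
 * could return as an array, letting tools enumerate memory categories
 * without hard-coding their names or count.
 */
#define MEMCAT_FREEABLE	0x01	/* immediately freeable */
#define MEMCAT_DIRTY	0x02	/* must be laundered before reuse */

struct memcat {
	char		mc_name[16];	/* e.g. "laundry" */
	uint64_t	mc_pages;	/* page count in this category */
	uint32_t	mc_flags;	/* MEMCAT_* */
};
```

A tool could then iterate over the returned array and display every category, picking up newly added ones without a recompile.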

>> That it's disturbing is not a bad thing.

> Well, something disturbing takes energy, so it should be worth the trouble.

>> It means that the system might benefit from having a swap device.

> I agree. But then, maybe "Laundry" is not the right name; "Dirty" seems better in this context (no action suggested now).

>> There are many utilities that want to gather info about memory usage from sysctls, so adding a new category is quite intrusive. The benefit should be worth the cost, but I don't yet see that it is.

> In this precise case, the benefit is avoiding questions and bug reports about "Laundry" never shrinking even under memory pressure. I asked myself this very question some time ago, and I'll bet I won't be the only one.

This can be addressed by making the documentation (e.g., for top(1)) more useful.

> We should probably take advantage of the next category addition or change to revise the sysctls so that applications can actually enumerate all memory categories covering all allocated memory. For example, grouping all these categories under a single sysctl node would largely insulate applications from changes in categories. Perhaps we could also report per-category flags where relevant (e.g., immediately freeable). In other words, I'm suggesting that we ponder, at some point, a better interface that lowers the cost of such changes.

The reason I changed it this way is that prior to PQ_UNSWAPPABLE, pages in the laundry that couldn't be swapped out would just be migrated endlessly between the other queues. An alternate solution would have been to just keep them at the end of the laundry queue. The fact that they ended up in a separate queue is just an implementation detail, and I don't want to codify it by adding top-level counters for pages in that state.

> This can be addressed by making the documentation (e.g., for top(1)) more useful.

Yes, that would be great. That said, an unambiguous (or at least less ambiguous) name could even have alleviated the need for that documentation ("Dirty" is not perfect either; maybe "Modified"; I'm still not sure). I certainly understand that changing this has a cost that is not worth paying now, but we may want to keep it in mind for the future.

> An alternate solution would have been to just keep them at the end of the laundry queue. The fact that they ended up in a separate queue is just an implementation detail, and I don't want to codify it by adding top-level counters for pages in that state.

But what benefits would that alternative have had? At a glance, I mostly see drawbacks.

Irrespective of the implementation (although the particular one chosen certainly may complicate some reporting), I think it would make sense to report memory that cannot be freed by the VM itself (with the possible cooperation of other subsystems indicating that some memory is just cache that can be thrown away).

As for the other categories reported by top(1), they are in any case quite tied to the implementation. As an example, I'm not sure that distinguishing wired memory from unswappable memory is useful to users and administrators.

The absence of a definitive answer here is also what prompted my suggestion to build some flexibility into what the kernel reports, with an interface that (ideally) lets applications cope easily with these changes. I remember having to patch a KDE memory-reporting application almost 20 years ago to cope with changes in the sysctls reporting memory, so this problem is not new. Nor do I think it should remain one going forward.