
Eliminate kmem_arena in preparation for NUMA
Closed, Public

Authored by jeff on Nov 21 2017, 10:05 PM.

Details

Summary

This eliminates kmem_arena and kmem_object. The intent is to simplify the API so that in a later patch I can use a per-domain vmem to keep KVA aligned with reservations.
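
For context, a per-domain vmem arrangement could look roughly like the sketch below. This is illustrative only: the array name, the init function, and the flat per-domain KVA layout are assumptions, not part of this diff.

/*
 * Hypothetical sketch: one KVA arena per memory domain so allocations
 * can later be kept aligned with per-domain reservations.
 * vm_ndomains and MAXMEMDOM are real; everything else is made up here.
 */
static vmem_t *vm_dom_kva_arena[MAXMEMDOM];

static void
kva_domain_arenas_init(vm_offset_t base, vm_size_t perdom)
{
	int domain;

	for (domain = 0; domain < vm_ndomains; domain++)
		/* Carve out a contiguous KVA range for each domain. */
		vm_dom_kva_arena[domain] = vmem_create("kva domain",
		    base + (vm_size_t)domain * perdom, perdom,
		    PAGE_SIZE, 0, M_WAITOK);
}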

I replaced the hard kmem virtual address limit with a soft limit in UMA. This allows the system to continue working much more gracefully once kmem is starved. When the soft limit is reached, UMA attempts to reduce its KVA footprint by flushing caches once per second as well as firing the lowmem handler.
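
A minimal sketch of that behavior, treating uma_drain_lock as a mutex for simplicity; the flag and worker names are illustrative, not the committed code:

static int uma_reclaim_needed;	/* set when the soft KVA limit is hit */

static void
uma_reclaim_worker(void *arg __unused)
{

	for (;;) {
		mtx_lock(&uma_drain_lock);
		/* Time out after one second to catch lost wakeups. */
		msleep(&uma_reclaim_needed, &uma_drain_lock, PVM,
		    "umarcl", hz);
		if (uma_reclaim_needed == 0) {
			mtx_unlock(&uma_drain_lock);
			continue;
		}
		uma_reclaim_needed = 0;
		mtx_unlock(&uma_drain_lock);
		/* Nudge other subsystems, then flush UMA's own caches. */
		EVENTHANDLER_INVOKE(vm_lowmem, 0);
		uma_reclaim();
	}
}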

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

sys/vm/uma_core.c
852 ↗(On Diff #35567)

There are three instances of PAGE_SIZE * keg->uk_ppera that could be replaced with a local variable, as in keg_alloc_slab().
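
For concreteness, the suggested cleanup would look something like this (sketch only):

	size_t size;

	size = PAGE_SIZE * keg->uk_ppera;
	/*
	 * ... the three occurrences then use "size" instead of
	 * repeating the product, as keg_alloc_slab() already does.
	 */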

999 ↗(On Diff #35567)

Won't this count direct-mapped pages?

3170 ↗(On Diff #35567)

You could possibly use a separate mutex to synchronize wakeups instead of overloading uma_drain_lock. Then, uma_reclaim_wakeup() could use that mutex and we wouldn't have to wake up once per second to handle lost wakeups.
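
Roughly, the suggestion is the classic flag-plus-sleep handshake under its own lock; names here are illustrative:

static struct mtx uma_reclaim_mtx;	/* mtx_init()ed at boot */
static int uma_reclaim_needed;

void
uma_reclaim_wakeup(void)
{

	mtx_lock(&uma_reclaim_mtx);
	if (uma_reclaim_needed == 0) {
		uma_reclaim_needed = 1;
		wakeup(&uma_reclaim_needed);
	}
	mtx_unlock(&uma_reclaim_mtx);
}

static void
uma_reclaim_worker(void *arg __unused)
{

	for (;;) {
		mtx_lock(&uma_reclaim_mtx);
		/* No timeout: the mutex makes lost wakeups impossible. */
		while (uma_reclaim_needed == 0)
			msleep(&uma_reclaim_needed, &uma_reclaim_mtx,
			    PVM, "umarcl", 0);
		uma_reclaim_needed = 0;
		mtx_unlock(&uma_reclaim_mtx);
		uma_reclaim();
	}
}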

3261 ↗(On Diff #35567)

Extra newline.

sys/kern/subr_vmem.c
848 ↗(On Diff #35567)

return (ENOMEM);

sys/vm/uma_core.c
3181 ↗(On Diff #35567)

I thought about this solution.

I think I would prefer just maybe_yield() there instead of pause(). Or, call pause() only when we are over the limit.
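
For example, where uma_over_kmem_limit() stands in as a hypothetical predicate for the soft-limit check:

	if (uma_over_kmem_limit())
		pause("umarcl", hz);	/* rate-limit the drain loop */
	else
		maybe_yield();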

3252 ↗(On Diff #35567)

return ();

3258 ↗(On Diff #35567)

A blank line is required before this.

sys/vm/uma_core.c
999 ↗(On Diff #35567)

I consider it something of a bug that they weren't considered in kmem limits before.

3170 ↗(On Diff #35567)

Notice there is no locking in uma_reclaim_wakeup(). Every allocation from UMA to a backing store will trigger this function. I would prefer cheap but harmlessly racy synchronization.
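
That is, something like the following sketch, where a missed wakeup is tolerable because the worker also times out once per second (flag name illustrative):

void
uma_reclaim_wakeup(void)
{

	/*
	 * Unlocked on purpose: this runs on every backing-store
	 * allocation, and a lost wakeup is recovered by the worker's
	 * one-second sleep timeout.
	 */
	if (uma_reclaim_needed == 0) {
		uma_reclaim_needed = 1;
		wakeup(&uma_reclaim_needed);
	}
}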

3181 ↗(On Diff #35567)

maybe_yield() only gives up the CPU. It will still spin acquiring every lock in UMA if you don't time-limit it.

This revision is now accepted and ready to land. Nov 26 2017, 7:22 PM
This revision was automatically updated to reflect the committed changes.