
(umaperf 5/7) Use per-domain keg locks.
Closed, Public

Authored by jeff on Dec 15 2019, 11:39 PM.

Details

Summary

This is part of a series of patches intended to enable first-touch NUMA policies for UMA by default. It also reduces the cost of uma_zalloc/zfree by approximately 30% each in my tests.

This locks each domain in the keg independently. Most keg fields that were not per-domain were already read-only. uk_pages and uk_free become ud_pages and ud_free, which is slightly annoying but not problematic. This allows the capacity to drain buckets to scale up with the number of nodes.
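
A rough sketch of the shape this takes (illustrative only; aside from ud_pages and ud_free named above, the field and macro names here are assumptions rather than the exact diff):

    /*
     * Illustrative sketch: keg state split per domain so each domain has
     * its own lock, slab lists, and page/free counters (the former
     * uk_pages and uk_free).
     */
    struct uma_domain {
            struct mtx      ud_lock;        /* protects this domain only */
            LIST_HEAD(, uma_slab) ud_part_slab;   /* partially used slabs */
            LIST_HEAD(, uma_slab) ud_free_slab;   /* completely free slabs */
            LIST_HEAD(, uma_slab) ud_full_slab;   /* fully used slabs */
            uint32_t        ud_pages;       /* pages backing this domain */
            uint32_t        ud_free;        /* free items in this domain */
    };

    /* Take the lock of a single keg domain instead of a global keg lock. */
    #define KEG_LOCK(keg, dom)    mtx_lock(&(keg)->uk_domain[(dom)].ud_lock)
    #define KEG_UNLOCK(keg, dom)  mtx_unlock(&(keg)->uk_domain[(dom)].ud_lock)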

Because hash zones use the keg lock, and for simplicity elsewhere, we force all !NUMA domains to use keg domain 0. It would be possible to limit this impact to hash zones only.
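
A minimal sketch of the !NUMA simplification (the identifiers are assumptions, not the committed code):

    /*
     * Illustrative only: on kernels built without NUMA every keg
     * operation is steered to keg domain 0, so there is exactly one
     * domain lock to take; hash kegs keep using that same lock for the
     * slab hash.
     */
    #ifdef NUMA
            domain = PCPU_GET(domain);
    #else
            domain = 0;
    #endif
            KEG_LOCK(keg, domain);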

Diff Detail

Lint
Lint Passed
Unit
No Test Coverage
Build Status
Buildable 28294
Build 26404: arc lint + arc unit

Event Timeline

lib/libmemstat/memstat_uma.c
459

kread() returns positive error values.

478

kread() returns 0 upon success.
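
A minimal sketch of the convention being pointed out, assuming kread() here is the local libmemstat helper that returns 0 on success and a positive error value on failure (the locals and the error-reporting detail are assumptions):

    /*
     * Illustrative only (kegp and uk are hypothetical locals): any
     * nonzero return from kread() is a failure, and the positive error
     * value is what gets reported back.
     */
    ret = kread(kvm, kegp, &uk, sizeof(uk), 0);
    if (ret != 0) {
            list->mtl_error = ret;
            return (-1);
    }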

sys/vm/uma_core.c
1197

n isn't being incremented anywhere.

1939

This comment is kind of misleading; it sounds like you want to check whether OFFPAGE was specified at keg initialization time.

3246

This is not a new problem with this patch, but I don't see how we ever get out of this loop when M_NOWAIT is specified and we are not doing a round-robin allocation. I think I broke this in r339686. Assuming I am not missing something, I will fix it shortly.

Review feedback, bugs, etc.

Also store per-domain information for kegs in round-robin domains.

lib/libmemstat/memstat_uma.c
478

This is still wrong?

sys/vm/uma_int.h
257

Why pad the lock when the domain structure itself is padded?

sys/vm/uma_int.h
257

Spinners slow down the lock owner, increasing hold time. For cross-domain frees there can be a lot of contention on the keg locks.
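
For illustration (names assumed), the padded variant keeps waiters spinning on the lock word off the cache line the owner is writing:

    /*
     * Illustrative only: pad the per-domain lock to a full cache line so
     * that cross-domain frees spinning on ud_lock do not bounce the line
     * holding ud_pages/ud_free while the owner is updating them.
     */
    struct uma_domain {
            struct mtx_padalign ud_lock;    /* padded to CACHE_LINE_SIZE */
            /* ... slab lists, ud_pages, ud_free ... */
    } __aligned(CACHE_LINE_SIZE);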

This revision is now accepted and ready to land. Dec 23 2019, 11:03 PM

Looks good but I do have a question about how uma_zone_reserve() is supposed to work now.

sys/vm/uma_core.c
1258–1259

Now it inserts the slab into the partial list.

1939

I think it's the flag that's misleading, more than the comment? I can take this as a note for flag cleanup: maybe these checks should be turned into UMA_ZONE_NOTPAGE (or whatever the spelling will be for the public API flag).

2399–2400

s/%ud/%u/

3198–3199

Just wanted to check that you intended how this changes the behavior for allocs with a reserve set. Now we keep the reserve on each of n domains. This should be fine as long as reserves are small. An alternative could be to distribute them across the domains (howmany(reserve, ndomains) each), which I think should be fine as long as M_USE_RESERVE allocs are willing to try the other domains too.
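
For concreteness, the alternative amounts to something like the following (illustrative only; uk_reserve is the existing reserve field, the split itself is hypothetical):

    /*
     * Illustrative only: split the requested reserve across the domains
     * rather than keeping the full reserve in each one.  howmany()
     * rounds up, so the per-domain reserves still sum to at least the
     * request.
     */
    keg->uk_reserve = howmany(items, vm_ndomains);

For example, a reserve of 10 items on a 4-domain machine becomes 3 per domain, 12 in total.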

sys/vm/uma_core.c
3198–3199

Yeah, the only user in tree is the vmem btag zone, and the reserve is only set when there is no UMA_MD_SMALL_ALLOC, so it'll really only be on non-NUMA machines. There are something like 8 tags reserved per CPU, so it's not a trivial amount of memory, but in context it's not something I'm worried about.
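
For a rough sense of scale (illustrative numbers only): 8 boundary tags per CPU on a 64-CPU machine is a reserve of 512 tags, and keeping a full copy of that reserve in a second domain would still only bring it to 1024 items.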

Looks good, after fixing up the keg_alloc_slab comment and the keg_dtor printf format.

sys/vm/uma_core.c
3198–3199

Okay.