
Simplify lazy advance and decouple it somewhat from ticks so that it can be forwarded after other broadcast interrupts.
Closed, Public

Authored by jeff on Feb 24 2020, 11:41 PM.

Details

Summary

This uses atomic_cmpset_64 to simplify lazy_advance. This will allow me to advance the lazy sequence without advancing ticks, which will be useful for blocking polls on lazy SMRs.
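
For illustration only, here is a rough sketch of the kind of cmpset-based lazy advance being described. The union layout, field names, and helper are hypothetical stand-ins rather than the actual sys/sys/smr.h definitions; the only real APIs assumed are the FreeBSD kernel atomic(9) primitives atomic_load_acq_64() and atomic_cmpset_64().

#include <sys/types.h>
#include <machine/atomic.h>	/* FreeBSD kernel atomic(9) primitives */

/* Hypothetical packed writer state: sequence and tick share one 64-bit word. */
union lazy_wr {
	struct {
		uint32_t	seq;	/* write sequence number */
		int		ticks;	/* tick of the last advance */
	};
	uint64_t	pair;
};

static uint32_t
lazy_advance_sketch(volatile uint64_t *wr, int now)
{
	union lazy_wr old, new;

	old.pair = new.pair = atomic_load_acq_64(wr);
	if (new.ticks != now) {
		new.ticks = now;
		new.seq += 2;	/* advance by one full sequence step */
		/*
		 * A single 64-bit cmpset updates seq and ticks together; if
		 * it fails, another CPU advanced concurrently, which is fine.
		 */
		atomic_cmpset_64(wr, old.pair, new.pair);
	}
	return (new.seq);
}

The point of the single cmpset is that the sequence and the tick move together, so the lazy clock can later be forwarded from places other than the tick handler without racing the tick bookkeeping.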

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

jeff retitled this revision from "Simplify lazy advance and decouple it somewhat from ticks so that it can be forwarded after other broadcast interrupts." to "Simplify lazy advance and decouple it somewhat from ticks so that it canbe forwarded after other broadcast interrupts.". Feb 24 2020, 11:41 PM
jeff edited the summary of this revision. (Show Details)
jeff added reviewers: rlibby, markj.
jeff set the repository for this revision to rS FreeBSD src repository - subversion.
sys/kern/subr_smr.c
186 ↗(On Diff #68781)

This would have a bug if you acquired a spinlock and blocked the interrupt for 2 ticks without the processor flushing the store buffer.

This seems really unlikely but perhaps needs an assert or some other protection.
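
For what it's worth, one possible shape for such an assertion is sketched below. It is purely illustrative: "ticks" is the global kernel tick counter, and s_wr.ticks stands in for the cached tick in the packed writer word discussed in this review.

/*
 * Illustrative guard only: complain if the cached lazy tick has fallen
 * more than one tick behind the global counter, e.g. because interrupts
 * were blocked behind a long-held spinlock.
 */
KASSERT(ticks - s_wr.ticks <= 1,
    ("smr: lazy tick %d lags global ticks %d", s_wr.ticks, ticks));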

477 ↗(On Diff #68781)

I think ultimately poll will have some top-level driver function that handles the sleep/advance for deferred and lazy, and this function can be simplified slightly.

Many polls don't require actual looping, so it may be advantageous to add a small pre-check and fast-path return to poll.
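
A pre-check along those lines might look like the following sketch. The helper name is made up; it assumes the smr_shared_t/s_rd_seq/SMR_SEQ_GEQ() names visible in sys/sys/smr.h and subr_smr.c, plus atomic_load_acq_int() from atomic(9).

#include <sys/param.h>
#include <sys/smr.h>

/*
 * Hypothetical fast path: if the minimum observed read sequence has
 * already passed the goal, the poll can return success immediately
 * without sleeping, advancing, or scanning per-CPU state.
 */
static bool
smr_poll_fast(smr_shared_t s, smr_seq_t goal)
{
	return (SMR_SEQ_GEQ(atomic_load_acq_int(&s->s_rd_seq), goal));
}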

sys/sys/smr.h
59 ↗(On Diff #68781)

I feel like this is a little ugly, but OK. If someone has another suggestion to ensure the grouping and alignment, I'm all ears.
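
As a hedged aside, the grouping and alignment expectations could at least be made explicit with compile-time checks; the union name below is the illustrative one from the summary sketch above, not the actual header's.

/* Illustrative compile-time checks for the packed seq/ticks word. */
_Static_assert(sizeof(union lazy_wr) == sizeof(uint64_t),
    "lazy writer state must fit in a single 64-bit word");
_Static_assert(_Alignof(union lazy_wr) == _Alignof(uint64_t),
    "lazy writer state must be 64-bit aligned");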

Looks good.

sys/kern/subr_smr.c
249 ↗(On Diff #68781)

(While looking at the generated code, I noticed amd64 atomic_cmpset forces us to populate a register with the return value of cmpset, even though we don't care. I'll see what kib thinks of a possible fix for that...)

sys/sys/smr.h
59 ↗(On Diff #68781)

I know that gcc explicitly blesses type punning with a union, but from what I was able to find, I think clang does not (e.g. https://reviews.llvm.org/D30538). However, we seem to use -fno-strict-aliasing, so I think we should be okay anyway. I did look at the (amd64) code generation and it looked fine as far as I was able to tell.

An alternative would be to _Static_assert the size, drop the union, and do the read/write like:

/* s_wr here is the local (non-union) struct copy being read and written. */
_Static_assert(sizeof(s_wr) == sizeof(uint64_t), "must fit in 64 bits");

uint64_t old, new;
old = atomic_load_acq_64(&s->s_wr);
memcpy(&s_wr, &old, sizeof(s_wr));	/* unpack struct from the 64-bit word */
...
memcpy(&new, &s_wr, sizeof(new));	/* pack updated struct back into a word */
atomic_cmpset_64(&s->s_wr, old, new);

But one may find this to be even uglier.

This revision is now accepted and ready to land. Feb 27 2020, 10:37 AM
This revision was automatically updated to reflect the committed changes.