User Details
- User Since
- Jul 6 2023, 3:01 PM
Sep 10 2023
Thanks, Mark, I will send you two commits as requested. Thanks again for all your help!
Sep 9 2023
Reformat to adhere to suggested style...
Jul 29 2023
Refactor to more closely follow seqc.h.
Jul 28 2023
Updated to quasi-follow seqlock semantics from seqc.h for continued discussion...
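The seqlock semantics referenced here can be sketched roughly as below; this loosely follows the generation-counter idea of seqc.h, but all names are illustrative, the fencing is one textbook C11 formulation rather than seqc.h's actual code, and a single writer is assumed:

```c
#include <stdatomic.h>

/*
 * Illustrative seqlock-style generation counter (not seqc.h itself).
 * An odd counter value means a write is in progress; readers retry
 * until they observe the same even value before and after their loads.
 * Assumes a single writer.
 */
static _Atomic unsigned int gen;
static _Atomic int shared_a, shared_b;	/* data guarded by gen */

static void
seq_write(int a, int b)
{
	unsigned int g = atomic_load_explicit(&gen, memory_order_relaxed);

	atomic_store_explicit(&gen, g + 1, memory_order_relaxed); /* odd */
	atomic_thread_fence(memory_order_release);
	atomic_store_explicit(&shared_a, a, memory_order_relaxed);
	atomic_store_explicit(&shared_b, b, memory_order_relaxed);
	atomic_store_explicit(&gen, g + 2, memory_order_release); /* even */
}

static void
seq_read(int *a, int *b)
{
	unsigned int g1, g2;

	for (;;) {
		g1 = atomic_load_explicit(&gen, memory_order_acquire);
		if (g1 & 1)
			continue;	/* writer active; retry */
		*a = atomic_load_explicit(&shared_a, memory_order_relaxed);
		*b = atomic_load_explicit(&shared_b, memory_order_relaxed);
		atomic_thread_fence(memory_order_acquire);
		g2 = atomic_load_explicit(&gen, memory_order_relaxed);
		if (g1 == g2)
			return;		/* consistent snapshot */
	}
}
```

The release fence after the odd store ensures that a reader who observes the new data also observes the odd counter, so torn reads are always detected and retried.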
Jul 25 2023
Are you saying that instead of aligning pthread_mutex allocations, we just need to ensure that allocation is padded to the end of the cache line? Then, perhaps the same thing needs to be done to rwlocks and spinlocks?
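The padding being asked about could look something like this minimal sketch; `CLSIZE` and `struct padded_mutex` are hypothetical names, and 64 bytes is an assumed cache-line size, not a value taken from the patch under review:

```c
#include <pthread.h>
#include <stdalign.h>

/* Assumed cache-line size; real code would take this from the platform. */
#define CLSIZE 64

/*
 * Aligning the mutex to a cache-line boundary also pads the struct out
 * to a whole number of cache lines (sizeof is always a multiple of the
 * alignment), so the lock never shares a line with a neighboring
 * allocation and false sharing is avoided.
 */
struct padded_mutex {
	alignas(CLSIZE) pthread_mutex_t m;
};
```

The same shape would apply to rwlocks and spinlocks if they turn out to need it.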
Fix how generation count is updated and synchronized with readers.
Add comments to explain how it works.
General observation. Overall I like this new feature! But I have a suspicion that using it for pthread mutexes might increase the lock/unlock latency a wee bit.
Jul 24 2023
General question: AFAICT, on amd64 __crt_malloc() seems to always return an address that is a power-of-two + 8 bytes. Why isn't it at least __alignof(max_align_t) aligned?
For example, it appears that malloc(3) always returns a suitably aligned allocation as long as the request is for more than 8 bytes.
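That observation can be probed with a small check; `malloc_is_aligned` is a hypothetical helper, and it tests only the C11 guarantee that malloc results are aligned for `max_align_t`, not the specific __crt_malloc behavior described above:

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/*
 * Return nonzero if malloc(n) yields a pointer aligned for max_align_t,
 * the strictest fundamental alignment the C standard requires.
 */
static int
malloc_is_aligned(size_t n)
{
	void *p = malloc(n);
	int ok = p != NULL && ((uintptr_t)p % _Alignof(max_align_t)) == 0;

	free(p);
	return (ok);
}
```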
Jul 20 2023
Use MACHINE_ABI to build only for 64-bit architectures.
Replace uint64_t with u_long for pshared hash table generation counts.
Jul 7 2023
I reverted all the m_qidx changes, lost a wee bit of perf across all mutex flavors, but numbers still look pretty good.
Updated diff to provide full context via diff -U99999999 ...
Jul 6 2023
Thanks John, I've updated the patch as requested, including the tools/build/options change (I wasn't aware of that, pretty cool!)