Differential D27536: riscv: small counter(9) improvements
Authored by mhorne on Dec 10 2020, 3:17 PM.

Details:
- Prefer atomics to the critical section.
- Use CPU_FOREACH to skip absent CPUs.

It seems that a nearly-identical change was made for arm64 in r313345.
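As a rough illustration of what the two bullets describe, here is a minimal sketch of a riscv counter(9) implementation that mirrors arm64's r313345. This is an assumption about the shape of the change, not the literal diff contents; the helper names (zpcpu_get(), zpcpu_get_cpu(), CPU_FOREACH(), atomic_add_64(), atomic_load_64()) are the kernel's existing counter/pcpu/atomic primitives.

```c
/* Kernel context; roughly what would live in sys/riscv/include/counter.h. */
#include <sys/param.h>
#include <sys/pcpu.h>
#include <sys/smp.h>
#include <machine/atomic.h>

/*
 * Increment via a single atomic read-modify-write on this CPU's slot.
 * Unlike a critical section, the atomic cannot be torn by an interrupt,
 * so the zeroing IPI discussed below remains reliable.
 */
static inline void
counter_u64_add(counter_u64_t c, int64_t inc)
{

	atomic_add_64((uint64_t *)zpcpu_get(c), inc);
}

/*
 * Sum the per-CPU slots.  CPU_FOREACH() iterates only over CPUs that
 * are actually present, skipping absent ones.
 */
static inline uint64_t
counter_u64_fetch_inline(uint64_t *p)
{
	uint64_t r;
	int cpu;

	r = 0;
	CPU_FOREACH(cpu)
		r += atomic_load_64((uint64_t *)zpcpu_get_cpu(p, cpu));
	return (r);
}
```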
Event Timeline

To be clear, the main motivation here and in r313345 is to ensure that zeroing is reliable? In the old version of this diff it is not, per the XXX comment, but neither the CR description nor the commit log message for r313345 mentions this explicitly, so maybe I'm missing something else.

Presumably you are asking about the exact meaning of the XXXKIB comment? Could you explain for me what makes the existing code unreliable for zeroing? It's not obvious to me why the critical section was not enough to protect the increment.

counter_u64_zero_inline() works by raising an interrupt on each CPU; the interrupt handler calls counter_u64_zero_one_cpu() to clear the counter for that CPU. In the old version of the diff this may fail to work because counter_u64_add() is not atomic with respect to interrupts: it loads the value into a register, modifies it, and stores the result. If the value is zeroed between the load and the store, the zeroing will be reverted when the interrupt handler finishes and the interrupted thread continues running. I believe this is what the XXXKIB comment is stating.

Thanks, that makes sense. counter_u64_zero_one being called via IPI is the detail I was forgetting. I can include a mention of the comment in the final commit message.
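To make the race concrete, here is a sketch of the old critical-section-based increment and where the zeroing IPI can land. This is a reconstruction from the discussion above, under the assumption that the pre-change code guarded a plain add with critical_enter()/critical_exit(); it is not the actual old riscv code.

```c
/*
 * Reconstruction of the pre-change increment described in the thread
 * (an assumption based on the discussion, not the literal old code).
 */
static inline void
counter_u64_add(counter_u64_t c, int64_t inc)
{

	critical_enter();	/* Blocks preemption, but not interrupts. */
	/*
	 * The compiler emits a load/add/store sequence for this line.
	 * If the counter_u64_zero_one_cpu() IPI handler runs between
	 * the load and the store, the store writes back the stale
	 * pre-zero value and the zeroing is silently undone.
	 */
	*(uint64_t *)zpcpu_get(c) += inc;
	critical_exit();
}
```

With the atomic version, the read-modify-write is a single indivisible operation, so the IPI handler can only observe the counter before or after the increment, never in between.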