
Fix a fasttrap race.

Authored by markj on Jun 5 2019, 5:21 PM.



fasttrap hooks the user breakpoint handler. When no probes are enabled,
it sets the hook pointers to NULL to avoid unnecessary calls into
fasttrap. However, it was missing synchronization for the case where a
user thread hit a breakpoint, and the probe was disabled and hooks
cleared before the thread managed to load the hook.

Take advantage of the fact that interrupts are disabled upon entry to
the breakpoint handler to fix this: when handling a breakpoint, load
the hook pointer before enabling interrupts, and when disabling probes,
issue an SMP rendezvous after removing breakpoints and before clearing
the hook pointers. Note that fasttrap is implemented only on i386 and
amd64.
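The handshake described above can be sketched roughly as follows. This is an illustrative, non-compilable sketch, not the committed diff: the function names `trap_user_bpt` and `fasttrap_disable_hooks` are hypothetical, and the hook name approximates FreeBSD's `dtrace_pid_probe_ptr`; `smp_rendezvous()`, `atomic_load_ptr()`, and `enable_intr()` are kernel-only interfaces.

```
/* Breakpoint (#BP) handler: interrupts are still disabled on entry. */
int
trap_user_bpt(struct trapframe *frame)
{
	/*
	 * Load the hook before enabling interrupts.  A CPU with
	 * interrupts disabled cannot service the rendezvous IPI, so
	 * the teardown path below cannot clear the hook until this
	 * load has completed.
	 */
	int (*hook)(struct trapframe *) =
	    atomic_load_ptr(&dtrace_pid_probe_ptr);
	enable_intr();
	if (hook != NULL && hook(frame) == 0)
		return (0);	/* consumed by fasttrap */
	return (1);		/* deliver SIGTRAP as usual */
}

/* Teardown path: called after the last fasttrap probe is disabled. */
void
fasttrap_disable_hooks(void)
{
	/* The user breakpoints have already been removed here. */

	/*
	 * Wait for all CPUs.  Any thread that trapped while the
	 * breakpoints still existed has, by now, loaded the (still
	 * non-NULL) hook pointer; it is safe to clear it.
	 */
	smp_rendezvous(NULL, NULL, NULL, NULL);

	dtrace_pid_probe_ptr = NULL;
}
```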

Diff Detail

rS FreeBSD src repository
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

markj created this revision. Jun 5 2019, 5:21 PM
markj edited the summary of this revision. (Show Details) Jun 5 2019, 5:22 PM
markj added reviewers: cem, kib.
cem accepted this revision. Jun 5 2019, 6:48 PM


1137–1138 ↗(On Diff #58272)

Should these use atomic_store_ptr() or C11 equivalent?

This revision is now accepted and ready to land. Jun 5 2019, 6:48 PM
kib accepted this revision. Jun 5 2019, 7:19 PM
kib added inline comments.
1137–1138 ↗(On Diff #58272)

I do not see much sense in this formal syntax. We require that any aligned pointer loads and stores are non-torn. I added relaxed atomics to machine/atomic.h only to avoid more questions of 'where are the atomics', and I do not want them used for normal C programming.

markj added inline comments. Jun 5 2019, 7:24 PM
1137–1138 ↗(On Diff #58272)

I don't think it's necessary: the stores will not be reordered with the dtrace_sync() call, and the trapping thread must complete its load before dtrace_sync() will return. The atomic_load_ptr() may not be strictly necessary either, but I am not sure that the compiler won't reorder the load and enable_intr() call otherwise. Maybe the volatile qualifier in the enable_intr() definition is sufficient.

cem added inline comments. Jun 5 2019, 8:18 PM
1137–1138 ↗(On Diff #58272)

We require that pointer loads/stores are not torn, but that doesn't prevent compiler or CPU reordering. The latter is generally not a problem on x86, I guess, but this is not x86 MD code and we do have a PPC implementation of dtrace.

I would guess the stores cannot be reordered before dtrace_sync(), and probably we are also bounded by mutex_exit() below, so it's possible that this is fine as is.

kib added inline comments. Jun 5 2019, 8:27 PM
1137–1138 ↗(On Diff #58272)

atomic_store_ptr() is relaxed in terms of the C11/C++11 memory model. It does not provide any visibility guarantees.

We do not have sequentially consistent atomic store/loads in machine/atomic.h at all.

cem added inline comments. Jun 5 2019, 8:29 PM
1137–1138 ↗(On Diff #58272)

Hence the suggestion to use the C11 stdatomic.h store instead.

kib added inline comments. Jun 5 2019, 8:34 PM
1137–1138 ↗(On Diff #58272)

There is no ABI for stdatomic.h or std::atomic. They could be incompatible with each other and with our machine/atomic.h.

Generally the kernel should use machine/atomic.h.

This revision was automatically updated to reflect the committed changes.