Fix a fasttrap race.
ClosedPublic

Authored by markj on Jun 5 2019, 5:21 PM.

Details

Summary

fasttrap hooks the user breakpoint handler. When no probes are enabled,
it sets the hook pointers to NULL to avoid unnecessary calls into
fasttrap. However, it was missing synchronization for the case where a
user thread hit a breakpoint, and the probe was disabled and hooks
cleared before the thread managed to load the hook.

Take advantage of the fact that interrupts are disabled upon entry to
the breakpoint handler to fix this: when handling a breakpoint, load
the hook pointer before enabling interrupts, and when disabling probes,
issue an SMP rendezvous after removing breakpoints and before clearing
the hook pointers. Note that we implement fasttrap only on i386 and amd64
currently.
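
A rough sketch of the two sides of this synchronization, assuming the shape described above (the identifiers fasttrap_hook, fasttrap_bp_sketch, and fasttrap_disable_sketch are illustrative, not the names used in the diff):

    #include <sys/types.h>
    #include <machine/cpufunc.h>    /* enable_intr() */

    struct trapframe;
    extern void dtrace_sync(void);  /* dtrace's SMP rendezvous */

    /* Illustrative hook pointer; the real code keeps several such hooks. */
    static int (*fasttrap_hook)(struct trapframe *, uintptr_t);

    /*
     * Breakpoint handler side: interrupts are still disabled on entry,
     * so the hook is loaded inside the interrupt-off window.
     */
    static int
    fasttrap_bp_sketch(struct trapframe *frame, uintptr_t pc)
    {
            int (*hook)(struct trapframe *, uintptr_t);

            hook = fasttrap_hook;   /* load before enabling interrupts */
            enable_intr();
            return (hook != NULL ? hook(frame, pc) : 0);
    }

    /*
     * Teardown side, after the breakpoints themselves have been removed:
     * the rendezvous does not return until every CPU has passed through
     * an interrupt-enabled window, so any thread that took a breakpoint
     * with the probe armed has already loaded the hook by the time the
     * pointer is cleared.
     */
    static void
    fasttrap_disable_sketch(void)
    {
            dtrace_sync();
            fasttrap_hook = NULL;
    }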

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

markj added reviewers: cem, kib.

LGTM.

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

Should these use atomic_store_ptr() or C11 equivalent?

This revision is now accepted and ready to land. Jun 5 2019, 6:48 PM
kib added inline comments.
sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

I do not see much sense in this formal syntax. We require that any aligned pointer loads and stores be non-torn. I added relaxed atomics into machine/atomic.h only to avoid more questions of 'where are the atomics', and do not want them used for normal C programming.

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

I don't think it's necessary: the stores will not be reordered with the dtrace_sync() call, and the trapping thread must complete its load before dtrace_sync() will return. The atomic_load_ptr() may not be strictly necessary either, but I am not sure that the compiler won't reorder the load and enable_intr() call otherwise. Maybe the volatile qualifier in the enable_intr() definition is sufficient.
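
To spell out the concern with a fragment (illustrative only; hook_ptr is a made-up variable, and the worry only applies if enable_intr()'s inline asm lacks a "memory" clobber):

    /* Intended: the hook is read while interrupts are still off. */
    hook = atomic_load_ptr(&hook_ptr);
    enable_intr();

    /*
     * Feared with a plain load: nothing obviously prevents the compiler
     * from sinking the load past the sti, reopening the race window.
     */
    enable_intr();
    hook = hook_ptr;

A __volatile asm cannot be deleted, but the compiler may still move plain memory accesses across it unless a "memory" clobber is present, which is why the explicit atomic_load_ptr() is the conservative choice here.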

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

We require that pointer loads/stores are not torn, but that doesn't prevent compiler or CPU reordering. The latter is not an x86 problem in general, I guess, but this is not x86 MD code and we do have a PPC implementation of dtrace.

I would guess the stores cannot be reordered before dtrace_sync(), and probably we are also bounded by mutex_exit() below, so it's possible that this is fine as is.

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

atomic_store_ptr() is relaxed in terms of the C11/C++11 memory model. It does not provide any visibility guarantee.

We do not have sequentially consistent atomic store/loads in machine/atomic.h at all.

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

Hence the suggestion to use the C11 stdatomic.h store instead.

sys/cddl/contrib/opensolaris/uts/common/dtrace/fasttrap.c
1137–1138 ↗(On Diff #58272)

There is no ABI for stdatomic.h or std::atomic. They could be incompatible with each other and with our machine/atomic.h.

Generally, the kernel should use machine/atomic.h.
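
For concreteness, the two spellings being weighed look roughly like this (an illustrative sketch; the hook variables are made up, and stdatomic.h appears next to machine/atomic.h only to contrast the forms):

    #include <stdatomic.h>          /* C11 atomics, no stable in-kernel ABI */
    #include <machine/atomic.h>     /* the kernel's own primitives */

    static _Atomic(void *) hook_c11;
    static void *hook_md;

    static void
    clear_hooks_sketch(void)
    {
            /*
             * C11 form: atomic_store() is sequentially consistent by
             * default, which is the visibility guarantee being asked
             * about above.
             */
            atomic_store(&hook_c11, NULL);

            /*
             * machine/atomic.h form: a relaxed store, so by itself it
             * only guarantees the store is not torn; the ordering in
             * this change comes from the dtrace_sync() rendezvous.
             */
            atomic_store_ptr(&hook_md, NULL);
    }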

This revision was automatically updated to reflect the committed changes.