sys/kern/subr_smp.c
int
quiesce_all_cpus(const char *wmesg, int prio)
{

	return (quiesce_cpus(all_cpus, wmesg, prio));
}
static void
cpus_fence_seq_cst_issue(void *arg __unused)
{

	atomic_thread_fence_seq_cst();
}

void
cpus_fence_seq_cst(void)
jeff: Can we not replace quiesce_all_cpus with this mechanism entirely?

mjg: I noted that no (or at least not the way it is used right now). With lockprof out of the way you are left with ktrace. It would have to be modified to provide some form of "no longer in ktrace code" point, just like I'm abusing critical sections for lockprof. Should a more general (and working) approach be desired (e.g., the one the current code claims to provide), the fix proposed by kib (high-priority threads) would do the trick, but it would also be too expensive for users like this one. Finally, even if this becomes part of a general solution later on, it will still have to work separately, as here, so as not to induce extra overhead when it can be avoided. I note once again that part of the motivation here is to be able to periodically dump stats under load with minimal extra disruption.
{
#ifdef SMP
	smp_rendezvous(
		smp_no_rendezvous_barrier,
kib: The indent is wrong, and the lines can be packed more tightly.

mjg: It's copy-pasted from rmlocks; I did not realize I had reindented it later. Is 4 fine?
		cpus_fence_seq_cst_issue,
		smp_no_rendezvous_barrier,
		NULL
	);
#else
	cpus_fence_seq_cst_issue(NULL);
#endif
}
/* Extra care is taken with this sysctl because the data type is volatile */
static int
sysctl_kern_smp_active(SYSCTL_HANDLER_ARGS)
{
	int error, active;

	active = smp_started;
	error = SYSCTL_OUT(req, &active, sizeof(active));