sys/kern/kern_malloc.c
		if ((malloc_nowait_count % malloc_failure_rate) == 0) {
			atomic_add_int(&malloc_failure_count, 1);
			*vap = NULL;
			return (EJUSTRETURN);
		}
	}
#endif
	if (flags & M_WAITOK) {
		KASSERT(curthread->td_intr_nesting_level == 0,
		    ("malloc(M_WAITOK) in interrupt context"));
markj: BTW, I am not sure why this assertion is predicated on M_WAITOK being set. It should always be true, I'd think.
kevans: I'm not necessarily seeing where this assertion is useful at all, to be honest. I know nothing about the different kinds of interrupt handlers, but almost everywhere that touches td_intr_nesting_level [0] also puts us in a critical section to hit the KASSERT just below this block.

[0] The exception seems to be that ipi_bitmap_handler() over in x86/x86/mp_x86.c will only do so for hardclock. Again, not familiar, but that seems to lead to an inconsistency in expectations for all of the other IPI handlers w.r.t. malloc(9) (and maybe others?), but I would suspect that it doesn't matter at all in practice, as they shouldn't be allocating anything.
markj: There is also smp_rendezvous_action(), which gets called from an interrupt handler without bumping intr_nesting_level but enters a critical section. In general I wouldn't assume that td_intr_nesting_level != 0 implies that td_critnest != 0. IMO we should simply add this condition to the assertion below, i.e., (curthread->td_critnest != 0 && td->td_intr_nesting_level) || SCHEDULER_STOPPED(). But of course this isn't directly related to the diff at hand.
		if (__predict_false(!THREAD_CAN_SLEEP())) {
#ifdef EPOCH_TRACE
			epoch_trace_list(curthread);
#endif
-			KASSERT(1,
-			    ("malloc(M_WAITOK) with sleeping prohibited"));
+			KASSERT(0,
+			    ("malloc(M_WAITOK) with sleeping prohibited"));
		}
	}
	KASSERT(curthread->td_critnest == 0 || SCHEDULER_STOPPED(),
	    ("malloc: called with spinlock or critical section held"));
#ifdef DEBUG_MEMGUARD
	if (memguard_cmp_mtp(mtp, *sizep)) {