sys/kern/subr_sleepqueue.c
 * HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
 * LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY
 * OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF
 * SUCH DAMAGE.
 */

/*
 * Implementation of sleep queues used to hold a queue of threads blocked on
 * a wait channel. Sleep queues are different from turnstiles in that wait
 * channels are not owned by anyone, so there is no priority propagation.
 * Sleep queues can also provide a timeout and can also be interrupted by
 * signals. That said, there are several similarities between the turnstile
 * and sleep queue implementations. (Note: turnstiles were implemented
 * first.) For example, both use a hash table of the same size where each
 * bucket is referred to as a "chain" that contains both a spin lock and
 * a linked list of queues. An individual queue is located by using a hash
 * to pick a chain, locking the chain, and then walking the chain searching
 * for the queue. This means that a wait channel object does not need to
 * embed its queue head just as locks do not embed their turnstile queue
 * head. Threads also carry around a sleep queue that they lend to the
 * wait channel when blocking. Just as in turnstiles, the queue includes
 * a free list of the sleep queues of other threads blocked on the same
 * wait channel in the case of multiple waiters.
 *
 * Some additional functionality provided by sleep queues includes the
 * ability to set a timeout. The timeout is managed using a per-thread
 * callout that resumes a thread if it is asleep. A thread may also
[... 506 lines elided ...]

	if (td->td_sleepqueue != NULL) {
		return;
	}

	/*
	 * If TDF_TIMEOUT is set, then our sleep has been timed out
	 * already but we are still on the sleep queue, so dequeue the
	 * thread and return.
	 */
	if (td->td_flags & TDF_TIMEOUT) {
kib: I think this is the right place to provide some reasoning why the code correctly covers the race where tc_windup()->sleeping_on_old_rtc() missed us. Some note about the sleepq lock used as an interlock there is due, IMO.
vangyzen (Done): Will do.
		MPASS(TD_ON_SLEEPQ(td));
		sq = sleepq_lookup(wchan);
		if (sleepq_resume_thread(sq, td, 0)) {
#ifdef INVARIANTS
			/*
			 * This thread hasn't gone to sleep yet, so it
			 * should not be swapped out.
			 */
			panic("not waking up swapper");
#endif
		}

kib (Not Done): I think it is a prerequisite for the race that Ts is not put to sleep yet. In fact, there is a very fine difference between TD_ON_SLEEPQ(), which means that sleepq_add() was done, and TD_IS_SLEEPING(), which means that the thread is really asleep. TD_IS_SLEEPING() is set as the last action in sleepq_switch() (this function), right before the call to mi_switch(). When I asked for the comment, I meant just to mention this race and also to explain how the use of the sleep chain spinlocks makes the td_rtcgen accesses correctly synchronized.

		mtx_unlock_spin(&sc->sc_lock);
		return;

kib (Done): As I asked before, please commit the style and comment grammar fixes now.

	}
#ifdef SLEEPQUEUE_PROFILING
	if (prof_enabled)
		sleepq_profile(td->td_wmesg);
#endif
	MPASS(td->td_sleepqueue == NULL);
	sched_sleep(td, pri);
	thread_lock_set(td, &sc->sc_lock);
[... 300 lines elided, ending in sleepq_signal(void *wchan, int flags, int pri, int queue) ...]

	wakeup_swapper = sleepq_resume_thread(sq, besttd, pri);
	thread_unlock(besttd);
	return (wakeup_swapper);
}
/*
 * Resume all threads sleeping on a specified wait channel.
 */

kib (Done): Style(9) requires a blank line after the '{' if no local vars are defined.

int
sleepq_broadcast(void *wchan, int flags, int pri, int queue)
{
	struct sleepqueue *sq;
	struct thread *td, *tdn;
	int wakeup_swapper;

	CTR2(KTR_PROC, "sleepq_broadcast(%p, %d)", wchan, flags);
	KASSERT(wchan != NULL, ("%s: invalid NULL wait channel", __func__));
	MPASS((queue >= 0) && (queue < NR_SLEEPQS));
	sq = sleepq_lookup(wchan);
	if (sq == NULL)
		return (0);
	KASSERT(sq->sq_type == (flags & SLEEPQ_TYPE),
	    ("%s: mismatch between sleep/wakeup and cv_*", __func__));
kib: I would prefer to remove the special case for matcher == NULL and instead provide a trivial matcher always returning true there.

	/*
	 * Resume all blocked threads on the sleep queue. The last thread will
	 * be given ownership of sq and may re-enqueue itself before
	 * sleepq_resume_thread() returns, so we must cache the "next" queue
	 * item at the beginning of the final iteration.
	 */
	wakeup_swapper = 0;
	TAILQ_FOREACH_SAFE(td, &sq->sq_blocked[queue], td_slpq, tdn) {
		thread_lock(td);
[... 128 lines elided ...]

	if (!TD_IS_SLEEPING(td))
		return (0);
	wchan = td->td_wchan;
	MPASS(wchan != NULL);
	sq = sleepq_lookup(wchan);
	MPASS(sq != NULL);

	/* Thread is asleep on sleep queue sq, so wake it up. */
	return (sleepq_resume_thread(sq, td, 0));
}
void
sleepq_remove_matching(bool (*matches)(struct thread *))
{
	struct sleepqueue_chain *sc;
	struct sleepqueue *sq;
	struct thread *td, *td_next;
	int i, wakeup_swapper;

	for (sc = &sleepq_chains[0]; sc < sleepq_chains + SC_TABLESIZE; ++sc) {
		if (LIST_EMPTY(&sc->sc_queues)) {
			continue;
		}

		wakeup_swapper = 0;
		mtx_lock_spin(&sc->sc_lock);
		LIST_FOREACH(sq, &sc->sc_queues, sq_hash) {
			for (i = 0; i < NR_SLEEPQS; ++i) {

kib (Unsubmitted): I suggest to re-factor this code fragment from sleepq_broadcast() and reuse the call.
vangyzen (Unsubmitted, Done): Good idea. Done.

				TAILQ_FOREACH_SAFE(td, &sq->sq_blocked[i],
				    td_slpq, td_next) {
					thread_lock(td);

kib (Done): No need for {}.
vangyzen (Done): True, but I very strongly prefer using such braces.

					if (matches(td)) {
						wakeup_swapper |=
						    sleepq_resume_thread(sq,
						    td, 0);
					}
					thread_unlock(td);
				}
			}
		}
		mtx_unlock_spin(&sc->sc_lock);

kib (Unsubmitted): It is enough to do one kick_proc0() after waking up all chains.
vangyzen (Unsubmitted, Done): I was wondering about that. Thanks.

		if (wakeup_swapper) {
			kick_proc0();
		}
	}
}
/*
 * Prints the stacks of all threads presently sleeping on wchan/queue to
 * the sbuf sb. Sets count_stacks_printed to the number of stacks actually
 * printed. Typically, this will equal the number of threads sleeping on the
 * queue, but may be less if sb overflowed before all stacks were printed.
 */

[... 322 lines elided ...]