
Hyperv: vmbus: Use INTR_MPSAFE in swi_add and bind swi thread to each CPU.
AbandonedPublic

Authored by honzhan_microsoft.com on Jan 6 2016, 2:42 AM.
Details

Summary

The software interrupt threads for the msg and event handlers should be bound to their respective CPUs, and since no synchronization is required between those threads, INTR_MPSAFE is the correct flag.

Submitted by: Hongjiang Zhang <honzhan microsoft com>
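
The approach described in the summary can be sketched roughly as follows. This is an illustrative sketch only, not the driver's actual code: the names `msg_intr_event`, `msg_swi_cookie`, `hv_msg_swi`, and `setup_per_cpu_swi` are placeholders, and the actual driver uses its own per-CPU context structures.

```c
/* Hedged sketch: one INTR_MPSAFE swi thread per CPU, bound to that CPU. */
static struct intr_event	*msg_intr_event[MAXCPU];
static void			*msg_swi_cookie[MAXCPU];

static void
hv_msg_swi(void *arg)
{
	/*
	 * Handle the VMBus message queued for this CPU.  The per-CPU
	 * threads do not share state, so no locking is needed even
	 * though the handler runs without Giant (INTR_MPSAFE).
	 */
}

static int
setup_per_cpu_swi(void)
{
	int cpu, error;

	CPU_FOREACH(cpu) {
		error = swi_add(&msg_intr_event[cpu], "hv_msg",
		    hv_msg_swi, (void *)(uintptr_t)cpu, SWI_CLOCK,
		    INTR_MPSAFE, &msg_swi_cookie[cpu]);
		if (error)
			return (error);
		/*
		 * Bind the swi thread to its CPU so the message is
		 * handled on the CPU that received the interrupt.
		 */
		error = intr_event_bind(msg_intr_event[cpu], cpu);
		if (error)
			return (error);
	}
	return (0);
}
```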

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

honzhan_microsoft.com retitled this revision to Hyperv: vmbus: Use INTR_MPSAFE in swi_add and bind swi thread to each CPU.
honzhan_microsoft.com updated this object.
sys/dev/hyperv/vmbus/hv_vmbus_drv_freebsd.c
524–525

I suggest we remove this per-CPU swi, since message handling is not performance-critical.

sys/dev/hyperv/vmbus/hv_vmbus_drv_freebsd.c
524–525

I discussed this with Jun and we found we have to keep the per-CPU swi message threads: a message must be handled on the CPU where the interrupt is received, and we can only guarantee that by keeping the per-CPU swi threads.

BTW, the "intr_event_bind(hv_vmbus_g_context.hv_msg_intr_event[j], j)" is actually unnecessary; however, we can leave it as it is.

BTW2, there is actually no need for INTR_MPSAFE on the msg swi thread, because the (non-timer) messages can only be delivered to one CPU, i.e., normally CPU0 only.

sys/dev/hyperv/vmbus/hv_vmbus_drv_freebsd.c
524–525

The fact that messages are only delivered to one CPU doesn't mean we don't need INTR_MPSAFE; it means we can pass INTR_MPSAFE without doing any locking inside the swi handler. Grabbing Giant when running the swi handler can cause lock contention with other subsystems that need the Giant lock, not just the swi handler in the hyperv driver.
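
The Giant-vs-MPSAFE trade-off above comes down to the flags argument of swi_add(9). A hedged illustration (the handler and variable names here are placeholders):

```c
/*
 * Without INTR_MPSAFE the swi handler runs with the Giant lock held,
 * which can contend with any other Giant-locked subsystem:
 */
swi_add(&ie, "hv_msg", hv_msg_swi, NULL, SWI_CLOCK, 0, &cookie);

/*
 * With INTR_MPSAFE the handler runs Giant-free.  That is safe here even
 * with no locking in the handler, because the (non-timer) messages are
 * only delivered to a single CPU:
 */
swi_add(&ie, "hv_msg", hv_msg_swi, NULL, SWI_CLOCK, INTR_MPSAFE, &cookie);
```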

royger edited edge metadata.

LGTM.

This revision is now accepted and ready to land. (Jan 15 2016, 5:23 PM)
decui_microsoft.com edited edge metadata.

LGTM

sys/dev/hyperv/vmbus/hv_vmbus_drv_freebsd.c
524–525

I agree.

I found another way to remove the SWI. We can add a "struct task" for each CPU, and schedule a task from the hard interrupt handler via taskqueue_enqueue(taskqueue_fast, &task[curcpu]). Then we don't need to spawn a thread for each CPU. That way, the code logic is actually simpler.

> I found another way to remove the SWI. We can add a "struct task" for each CPU, and schedule a task from the hard interrupt handler via taskqueue_enqueue(taskqueue_fast, &task[curcpu]). Then we don't need to spawn a thread for each CPU. That way, the code logic is actually simpler.

Using per-CPU taskqueue threads avoids a 'remote schedule', i.e. an IPI.

By reusing taskqueue_fast, you end up with only one event handler thread; that's obviously not what we want for the upcoming vRSS support.
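
The taskqueue-based alternative discussed above could look roughly like this. This is an illustrative sketch, not the driver's code: `msg_task`, `vmbus_msg_task_fn`, `msg_task_init`, and `vmbus_msg_intr` are placeholder names, and, as noted in the reply, reusing the shared taskqueue_fast serializes the work onto its threads rather than guaranteeing per-CPU execution.

```c
/* Hedged sketch: one struct task per CPU, enqueued from the hard
 * interrupt handler on the CPU that took the interrupt. */
static struct task msg_task[MAXCPU];

static void
vmbus_msg_task_fn(void *context, int pending)
{
	/* Process the VMBus messages queued for this CPU. */
}

static void
msg_task_init(void)
{
	int cpu;

	CPU_FOREACH(cpu)
		TASK_INIT(&msg_task[cpu], 0, vmbus_msg_task_fn,
		    (void *)(uintptr_t)cpu);
}

/* Called from the hard interrupt handler: */
static void
vmbus_msg_intr(void)
{
	taskqueue_enqueue(taskqueue_fast, &msg_task[curcpu]);
}
```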

howard0su_gmail.com edited edge metadata.

> I found another way to remove the SWI. We can add a "struct task" for each CPU, and schedule a task from the hard interrupt handler via taskqueue_enqueue(taskqueue_fast, &task[curcpu]). Then we don't need to spawn a thread for each CPU. That way, the code logic is actually simpler.

> Using per-CPU taskqueue threads avoids a 'remote schedule', i.e. an IPI.

> By reusing taskqueue_fast, you end up with only one event handler thread; that's obviously not what we want for the upcoming vRSS support.

The change is for messages, which are mainly the commands to set up a channel, remove a channel, etc. They are not performance-critical like event handling; that is, they are not related to NIC performance.

This revision now requires changes to proceed. (Jan 17 2016, 1:49 PM)

> I found another way to remove the SWI. We can add a "struct task" for each CPU, and schedule a task from the hard interrupt handler via taskqueue_enqueue(taskqueue_fast, &task[curcpu]). Then we don't need to spawn a thread for each CPU. That way, the code logic is actually simpler.

> Using per-CPU taskqueue threads avoids a 'remote schedule', i.e. an IPI.

> By reusing taskqueue_fast, you end up with only one event handler thread; that's obviously not what we want for the upcoming vRSS support.

> The change is for messages, which are mainly the commands to set up a channel, remove a channel, etc. They are not performance-critical like event handling; that is, they are not related to NIC performance.

OK, I thought you meant to use taskqueue_fast for both events and msgs.

honzhan_microsoft.com edited edge metadata.
  • hyperv: vmbus: refactor msg handler by replacing swi with task queue