xen/intr: fix the event channel enabled per-cpu mask
ClosedPublic

Authored by royger on Nov 4 2015, 3:37 PM.
Details

Summary

BITSET cannot be used to store the per-cpu event channel masks because its
underlying storage is an array of longs. This is fine on x86, but on ARM the
event channel pending and masked arrays shared with the hypervisor are arrays
of uint64_t regardless of the bitness of the guest. Switch back to using
xen_ulong_t, which always matches the type of the arrays shared with the
hypervisor.
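A minimal sketch (not the committed FreeBSD code) of what the summary describes: the per-cpu mask is declared with xen_ulong_t elements, matching the type of the arrays shared with the hypervisor, rather than BITSET's long-based storage. The names NR_EVENT_CHANNELS, evtchn_enabled, and the non-atomic helpers below are illustrative.

```c
#include <stdint.h>
#include <limits.h>

/* On ARM the shared-info arrays are uint64_t regardless of guest
 * bitness; uint64_t is assumed here for illustration. On x86 the
 * real xen_ulong_t is the native unsigned long. */
typedef uint64_t xen_ulong_t;

#define NR_EVENT_CHANNELS 4096
#define XEN_ULONG_BITS (sizeof(xen_ulong_t) * CHAR_BIT)

/* Per-cpu mask: one bit per event channel, stored in xen_ulong_t
 * words so the layout matches the hypervisor-shared arrays. */
static xen_ulong_t evtchn_enabled[NR_EVENT_CHANNELS / XEN_ULONG_BITS];

void
xen_set_bit(int bit, xen_ulong_t *map)
{
	map[bit / XEN_ULONG_BITS] |=
	    (xen_ulong_t)1 << (bit % XEN_ULONG_BITS);
}

int
xen_test_bit(int bit, const xen_ulong_t *map)
{
	return ((map[bit / XEN_ULONG_BITS] >>
	    (bit % XEN_ULONG_BITS)) & 1);
}
```

The point of the sketch is only the storage type: indexing by XEN_ULONG_BITS keeps bit positions identical to what the hypervisor writes into the shared pending/masked arrays.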

For ARM{32/64} a set of xen_{set/test/clear}_bit functions will have to be
introduced that always use atomic_{...}_64 (because on ARM xen_ulong_t is
always 64 bits).
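A hedged sketch of the ARM-side helpers described above: bit operations that always act on 64-bit words, since xen_ulong_t is 64-bit on ARM even for 32-bit guests. The real kernel code would use FreeBSD's atomic_set_64/atomic_clear_64 primitives; C11 atomics stand in here so the example is self-contained.

```c
#include <stdint.h>
#include <stdatomic.h>

typedef uint64_t xen_ulong_t;
#define XEN_ULONG_BITS 64

/* Atomically set a bit in a 64-bit word array. */
void
xen_set_bit(int bit, volatile _Atomic uint64_t *map)
{
	atomic_fetch_or(&map[bit / XEN_ULONG_BITS],
	    (uint64_t)1 << (bit % XEN_ULONG_BITS));
}

/* Atomically clear a bit in a 64-bit word array. */
void
xen_clear_bit(int bit, volatile _Atomic uint64_t *map)
{
	atomic_fetch_and(&map[bit / XEN_ULONG_BITS],
	    ~((uint64_t)1 << (bit % XEN_ULONG_BITS)));
}

/* Test a bit; a plain atomic load is sufficient for reading. */
int
xen_test_bit(int bit, volatile _Atomic uint64_t *map)
{
	return ((atomic_load(&map[bit / XEN_ULONG_BITS]) >>
	    (bit % XEN_ULONG_BITS)) & 1);
}
```

On ARM32 each of these compiles to a 64-bit atomic sequence (e.g. ldrexd/strexd), which is the extra cost the review comments below discuss.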

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

royger retitled this revision from to xen/intr: fix the event channel enabled per-cpu mask.
royger updated this object.
royger edited the test plan for this revision. (Show Details)
royger added a reviewer: julien.grall_citrix.com.

TBH, I would have preferred to see a partial revert of the offending commit rather than mixing it with logical changes like s/long/xen_ulong_t/.

The previous code never used xen_ulong_t, only long. If you really want to use xen_ulong_t, there are other places to fix as well (see comment inline).
FYI, I've got a patch that fixes all the s/long/xen_ulong_t/ cases for the event channel. I haven't sent it yet because I want to send a big batch of changes for the event channel code.

Furthermore, the use of 64-bit atomics on ARM will be more expensive than the 32-bit version, mainly because we have to store two 32-bit values rather than one. So it makes more sense to keep the xen_* bitops using long.

sys/x86/xen/xen_intr.c
107 ↗(On Diff #9945)

The 2 lines above should be fixed to support xen_ulong_t

190 ↗(On Diff #9945)

This looks like collateral damage from the xen_*_ changes: 64-bit atomics on ARM32 will be slower than 32-bit atomic accesses.

However, xen_intr_pirq_eoi_map is x86-specific, so I guess we don't care much.

493 ↗(On Diff #9945)

static inline xen_ulong_t

516 ↗(On Diff #9945)

xen_ulong_t

521 ↗(On Diff #9945)

xen_ulong_t

547 ↗(On Diff #9945)

This call needs to be fixed, maybe by introducing a macro atomic_readandclear_xen_ulong?
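A sketch of the suggested wrapper, under the assumption that it would map onto atomic_readandclear_long on x86 and atomic_readandclear_64 on ARM. The name follows the reviewer's suggestion; C11 atomic_exchange stands in for the kernel primitive so the example compiles on its own.

```c
#include <stdint.h>
#include <stdatomic.h>

typedef uint64_t xen_ulong_t;

/* Atomically fetch the current value and zero the word, as done
 * when consuming a pending-event selector word. */
static inline xen_ulong_t
atomic_readandclear_xen_ulong(volatile _Atomic xen_ulong_t *p)
{
	return (atomic_exchange(p, 0));
}
```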

646 ↗(On Diff #9945)

This needs to be fixed, maybe by introducing atomic_store_rel_xen_ulong?
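Similarly, a sketch of the suggested atomic_store_rel_xen_ulong, assumed to wrap atomic_store_rel_long on x86 and atomic_store_rel_64 on ARM. C11 release-store semantics stand in for the kernel's _rel variant.

```c
#include <stdint.h>
#include <stdatomic.h>

typedef uint64_t xen_ulong_t;

/* Store with release semantics: prior writes become visible
 * before the stored value is observed. */
static inline void
atomic_store_rel_xen_ulong(volatile _Atomic xen_ulong_t *p, xen_ulong_t v)
{
	atomic_store_explicit(p, v, memory_order_release);
}
```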

royger edited edge metadata.

Revert the evtchn_enabled changes but keep the use of
xen_{set/test/clear}_bit.

This revision was automatically updated to reflect the committed changes.