
bhyve: fix reporting of virtio scsi seg_max
ClosedPublic

Authored by vangyzen on Jun 5 2019, 9:43 PM.

Details

Summary

The seg_max value reported to the guest should be two less than the host's maximum, in order to leave room for the request and the response.

We hit the "too many segments to enqueue" assertion on OneFS because we increased MAXPHYS to 256 KiB.

Test Plan

OneFS now boots in bhyve, where it previously always panicked during boot. FreeBSD still works as before.

Diff Detail

Repository
rS FreeBSD src repository - subversion
Lint: Not Applicable
Unit Tests: Not Applicable

Event Timeline

This revision is now accepted and ready to land. Jun 5 2019, 9:49 PM

What is the panic message?

I believe seg_max is the number of segments in a SCSI command while vtscsi_maximum_segments computes the maximum scatter-gather entries in a request enqueued to the virtqueue. The VTSCSI_MIN_SEGMENTS accounts for the SG entries needed for the request and response (virtio_scsi_cmd_{req,resp}). The return value is also used to configure the CAM maxio size in vtscsi_cam_path_inquiry. If nothing else, this change is incomplete because a small enough seg_max value will cause a bogus maxio to be calculated.
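
For reference, a hedged paraphrase of the guest-side logic being described (modeled on sys/dev/virtio/scsi/virtio_scsi.c of this era; MIN, MAXPHYS, and PAGE_SIZE are the usual kernel macros, and the real function also takes the softc and considers indirect descriptors):

    #define VTSCSI_MIN_SEGMENTS	2	/* virtio_scsi_cmd_req + virtio_scsi_cmd_resp */

    static int
    vtscsi_maximum_segments(int seg_max)
    {
    	int nsegs = VTSCSI_MIN_SEGMENTS;

    	if (seg_max > 0)
    		nsegs += MIN(seg_max, MAXPHYS / PAGE_SIZE + 1);	/* +1 for unaligned buffers */
    	else
    		nsegs += 1;
    	return (nsegs);
    }

    /* vtscsi_cam_path_inquiry() then derives CAM's maxio from this value: */
    /*     cpi->maxio = (max_nsegs - VTSCSI_MIN_SEGMENTS - 1) * PAGE_SIZE;  */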

Does booting your increased MAXPHYS kernel on QEMU/KVM also panic?

If you change the config seg_max in bhyve to be VTSCSI_MAXSEG - 2, does that fix the issue?

What is the panic message?

"Something something 65 64/0"

Does booting your increased MAXPHYS kernel on QEMU/KVM also panic?

No -- but I believe this is only because QEMU's default number of segments is 128, while bhyve's is 64. (256 KiB MAXPHYS / PAGE_SIZE => 64 segments.)

I believe seg_max is the number of segments in a SCSI command while...

Oh! Thanks for that explanation.

If you change the config seg_max in bhyve to be VTSCSI_MAXSEG - 2, does that fix the issue?

I'll try that now.

What is the panic message?

"Something something 65 64/0"

Essentially, yeah: virtqueue_enqueue: vtscsi0 request - too many segments to enqueue: 65, 64/0

Essentially, yeah: virtqueue_enqueue: vtscsi0 request - too many segments to enqueue: 65, 64/0

Ah, makes sense: bhyve vtscsi sets the virtqueue size to be the same as the maximum segment count (and I guess indirect descriptors aren't negotiated). I suppose the driver could be refactored to further cap seg_max by the queue size (or to fail to attach, since it is a device bug), although this device has multiple virtqueues, so that might artificially limit it. IIRC there is some verbiage in the spec that basically says "don't be stupid around virtqueue descriptor sizes". A quick look at the NetBSD, OpenBSD, and Linux VirtIO SCSI drivers shows that all of them honor seg_max as-is. So this is probably only worth fixing in bhyve, perhaps with a comment or compile-time assert.
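
Putting the thread's numbers together, a hedged back-of-the-envelope (assuming PAGE_SIZE = 4 KiB, a 64-entry request virtqueue, no indirect descriptors, and the guest-side math sketched earlier):

    With seg_max = 64 reported by bhyve:
        guest max_nsegs = 2 + min(64, MAXPHYS/PAGE_SIZE + 1) = 2 + min(64, 65) = 66
        CAM maxio       = (66 - 2 - 1) * PAGE_SIZE = 63 pages = 252 KiB
        worst request   = 63 data descriptors + 1 req + 1 resp = 65 > 64   (the panic above)
    With seg_max = VTSCSI_MAXSEG - 2 = 62:
        guest max_nsegs = 2 + min(62, 65) = 64
        CAM maxio       = (64 - 2 - 1) * PAGE_SIZE = 61 pages = 244 KiB
        worst request   = at most 62 data descriptors + 2 headers = 64, which fits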

This revision now requires review to proceed. Jun 6 2019, 1:30 PM

If you change the config seg_max in bhyve to be VTSCSI_MAXSEG - 2, does that fix the issue?

I'll try that now.

That worked.

Thanks for the further explanation. I'll try to work that into a comment later this morning, since a bare foo - 2 clearly needs one.

Fix it in bhyve instead

Wait, I think we should still have a seat belt in guests. E.g., it is much easier to update a single guest than a hypervisor host. And does it make sense to limit to sub-256 kB I/Os when Linux QEMU allows 512 kB?

Wait, I think we should still have a seat belt in guests.

I agree, especially for Isilon's usage. I've been distracted most of the morning. I'll look at which seatbelt makes the most sense.

And does it make sense to limit to sub-256 kB I/Os when Linux QEMU allows 512 kB?

I don't know of a reason, but I'm not familiar with this area. @bryanv?
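
One hypothetical shape for such a guest-side seat belt, sketched only (the virtqueue accessor and softc field names here are assumptions, not a proposed patch), would be to clamp the computed segment count during attach:

    /*
     * Hypothetical guard, not a proposed patch (accessor and field names
     * are guesses): if the device advertises a seg_max that cannot fit in
     * its own request virtqueue together with the req/resp headers, clamp
     * it here rather than asserting later in virtqueue_enqueue().
     */
    int qsize = virtqueue_size(sc->vtscsi_request_vq);

    if (sc->vtscsi_max_nsegs > qsize) {
    	device_printf(sc->vtscsi_dev,
    	    "seg_max too large for request queue; clamping to %d\n", qsize);
    	sc->vtscsi_max_nsegs = qsize;
    }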

Please document why the -2 is needed per the earlier discussion. Does this need urgent attention to get in 11.3?

So adding the -2 seems very similar to the change made to virtio-block in https://svnweb.freebsd.org/base?view=revision&revision=347033 which also has a -2 in its seg_max.

usr.sbin/bhyve/pci_virtio_scsi.c
68 ↗(On Diff #58306)

I wonder if this should be changed to 128, similar to https://svnweb.freebsd.org/base?view=revision&revision=347033

Does this need urgent attention to get in 11.3?

In my opinion, no. To my knowledge, I'm the only person to report this, and I only hit it by running OneFS under bhyve, which uses a larger MAXPHYS.

In my opinion, no. To my knowledge, I'm the only person to report this, and I only hit it by running OneFS under bhyve, which uses a larger MAXPHYS.

Note that there are other FreeBSD consumers who use a large MAXPHYS, even larger than 256k (Netflix, some tape filesystem folks, ...), although I don't know whether they run as bhyve guests (probably not).

And does it make sense to limit to sub-256 kB I/Os when Linux QEMU allows 512 kB?

I don't know of a reason, but I'm not familiar with this area. @bryanv?

I'm not very familiar with bhyve VirtIO SCSI, so I'm not sure whether it was arbitrary or because of a CTL limit.

vangyzen retitled this revision from "Fix calculation of vtscsi_max_nsegs" to "bhyve: fix reporting of virtio scsi seg_max". Jun 19 2019, 9:42 PM
vangyzen edited the summary of this revision. (Show Details)
vangyzen edited the test plan for this revision. (Show Details)
usr.sbin/bhyve/pci_virtio_scsi.c
68 ↗(On Diff #58306)

Perhaps. I'll set up Linux and Windows guests to test. If it works, I'll commit it separately from these changes.

This revision is now accepted and ready to land. Jun 20 2019, 12:09 AM
This revision was automatically updated to reflect the committed changes.