
vtnet: fix netmap support

Authored by vmaffione on Nov 9 2018, 9:07 AM.



netmap(4) support for vtnet(4) was incomplete and had multiple bugs.
This commit fixes those bugs to bring netmap on vtnet to a functional state.


  • Handle errors returned by virtqueue_enqueue() properly (they were previously ignored).
  • Make sure each virtqueue is accessed by either netmap or the rest of the kernel, but never both.
  • Compute the number of netmap slots for TX and RX separately, according to whether indirect descriptors are used or not for a given virtqueue.
  • Make sure sglists are freed according to their type (mbufs or netmap buffers).
  • Add support for multiqueue and netmap host (aka sw) rings.
  • Intercept VQ interrupts directly instead of intercepting them in txq_eof and rxq_eof. This simplifies the code and makes it easier to guarantee that taskqueues are not running for a VQ while it is in netmap mode.
  • Implement vtnet_netmap_config() to cope with changes in the number of queues.
Test Plan

I used a FreeBSD VM with 4 vCPUs and 2 GiB of memory, running on a Linux host with the QEMU/KVM hypervisor. The Linux host has a 3rd generation Intel Core i7 CPU. I use the "vhost" in-kernel accelerator (a Linux technology) to speed up virtio processing in the host.

Test 1

In the VM, I run pkt-gen in tx mode on the vtnet0 interface:

  pkt-gen -i vtnet0 -f tx

On the host, I run pkt-gen in rx mode on the TAP interface that backs the VM's vtnet0:

  pkt-gen -i netmap:tap1_60 -f rx

I measure about 1.5 Mpps.

Test 2

In the VM, I run pkt-gen in rx mode on the vtnet0 interface:

  pkt-gen -i vtnet0 -f rx

On the host, I run pkt-gen in tx mode on the TAP interface that backs the VM's vtnet0:

  pkt-gen -i netmap:tap1_60 -f tx

I measure about 2.1 Mpps.

Test 3

This test is about vtnet0 host rings.
In the VM, I disable all the offloads (as required by netmap) and assign an address:

  ifconfig vtnet0 up -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6

On the host, I assign an address to the "br01" interface of the in-kernel bridge to which the VM's TAP interface is attached (so that I can ping from the host).
Then in the VM I start the netmap "bridge" program so that it forwards packets between the vtnet0 TX/RX (hardware) rings and the TX/RX host rings:

  bridge -w0 -i netmap:vtnet0

After I do this, I'm still able to ping from the host, which means that host rings are working.
Finally, to check that TCP works across the host rings (e.g., checksums are computed correctly, interrupts are delivered, etc.), I try netcat between the host and the VM. E.g., in the VM:

  nc -l 8000   # listen on port 8000

And in the host:

  nc 8000

and I observe that I'm able to chat between the host and the VM.

Diff Detail

rS FreeBSD src repository
Automatic diff as part of commit; lint not applicable.
Automatic diff as part of commit; unit tests not applicable.

Event Timeline

vmaffione created this revision. Nov 9 2018, 9:07 AM
vmaffione updated this revision to Diff 50194. Nov 9 2018, 9:32 AM

Add missing update to if_vtnetvar.h

gnn requested changes to this revision. Nov 10 2018, 7:49 AM

Please update the onoff variable to match how we normally express this in FreeBSD. Other than that small nit this looks fine.

88 ↗(On Diff #50194)

I'd rename onoff to state as that's a more common name for such a boolean in our kernel.

This revision now requires changes to proceed. Nov 10 2018, 7:49 AM
vmaffione updated this revision to Diff 50254. Nov 10 2018, 3:20 PM

Renamed "onoff" --> "state", as suggested by reviewers.

vmaffione marked an inline comment as done. Nov 10 2018, 3:22 PM
gnn accepted this revision. Nov 10 2018, 3:55 PM

Thanks. Please commit with the standard Approved by: line.

This revision is now accepted and ready to land. Nov 10 2018, 3:55 PM

Can you update this diff with the full context? Thanks.

bryanv added inline comments. Nov 10 2018, 4:48 PM
224 ↗(On Diff #50254)

The use of a shared header here (and more so on the receive side) strikes me as rather brittle. Is there some existing vtnet netmap code that un-negotiates most of the offload features?

82 ↗(On Diff #50254)

Can these be wrapped in an #ifdef DEV_NETMAP?

vmaffione marked 3 inline comments as done. Nov 11 2018, 10:27 AM

I'm sorry, how can I generate full diff context with SVN? I cannot find the right command...

224 ↗(On Diff #50254)

No, renegotiation happens when the netmap application runs something like

  ifconfig vtnet0 -txcsum -rxcsum -tso4 -tso6 -lro -txcsum6 -rxcsum6

which is something that netmap applications normally do.

The point here is that netmap does not support offloads. It simply ignores and strips the vtnet header on receive, and prepends a zeroed vtnet header on transmission (for compatibility with the hypervisor). So having a shared header is important to avoid dynamic allocation or wasting space with a parallel array.
Using a different shared header for each TX/RX queue removes cache ping-pong effects in case the backend processes each virtqueue using a different thread (e.g., this is the case for vhost-net on Linux, OVS-DPDK, etc.), or in case the netmap application processes each corresponding netmap ring using a different thread.

vmaffione updated this revision to Diff 50281. Nov 11 2018, 10:30 AM
vmaffione marked an inline comment as done.

Added DEV_NETMAP in if_vtnetvar.h
Provided full diff context.

This revision now requires review to proceed. Nov 11 2018, 10:30 AM
bryanv added inline comments. Nov 11 2018, 8:09 PM
299 ↗(On Diff #50281)

Either check or assert the value of err?

224 ↗(On Diff #50254)

Is there any code that disables the mergeable receive buffers feature? With mergeable buffers, vtnet_netmap_kring_refill() does not quite look correct because the header does not get a separate descriptor.

vmaffione updated this revision to Diff 50304. Nov 12 2018, 9:31 AM
vmaffione marked 3 inline comments as done.

Added assertions for sglist_append().
Fixed indentation.

224 ↗(On Diff #50254)

No, currently we do not disable mergeable receive buffers, but we could do that on the reinit cycle triggered by vtnet_netmap_reg().

I do not understand your observation about the separate descriptor in case of mergeable buffers, though.
First, the header does get a separate virtio descriptor, as you can see at line 299. The second descriptor in the chain (line 300) points to the netmap buffer.
Second, why would a separate descriptor be necessary? On the contrary, with mergeable buffers the guest does not know how a buffer passed to virtqueue_enqueue() will be used for a received packet: it may hold the beginning of the packet, the middle, or the end. And indeed, when mergeable buffers are used, sc->vtnet_rx_nsegs == 1, i.e., the driver publishes buffers one at a time, with no virtio descriptors "dedicated" to the virtio-net header.

bryanv added inline comments. Nov 12 2018, 2:20 PM
224 ↗(On Diff #50254)

With the mergeable buffer feature, the host device can spread a received packet across multiple descriptors, storing the number of descriptors in the header's num_buffers field. I see this being broken when the header is shared.

You might not hit this because either your host device does not support mergeable buffers, or, since LRO is disabled, your receive buffers are always large enough that num_buffers is always one.

vmaffione marked an inline comment as done. Nov 12 2018, 2:36 PM
vmaffione added inline comments.
224 ↗(On Diff #50254)

Absolutely. This was not under discussion. As I said, we can disable mergeable buffers in vtnet_netmap_reg(), and I will do that.
I was commenting on the need for a virtio descriptor dedicated to the virtio-net header.

Btw, my host supports mergeable buffers, but large buffers never arrive because the guest disables txcsum/rxcsum, which implies no TSO and therefore no large frames.

bryanv added inline comments. Nov 12 2018, 5:05 PM
224 ↗(On Diff #50254)

The support for mergeable buffers is determined at attach time, and various pieces of the driver are configured/sized accordingly; they are not intended to be changed on the fly at reinit time. I think the point of your netmap register call is too late (and changing it there would impact non-netmap users).

Ultimately I feel you're going to have to implement full receive feature support in netmap, or at least fail the netmap register when unsupported features have been negotiated. Otherwise, you risk corruption or other hard-to-find bugs. Also, I have support for VirtIO v1 [1] that I hope to commit soon, which further complicates the Rx/Tx contract when negotiated.

I'm fine if you want to commit this since it does seem to be an improvement, but I think additional work is eventually needed. After VirtIO v1 support is committed, I would really like to see if we can refactor all of this so we don't have to support two Rx/Tx path implementations.

[1] -

1883 ↗(On Diff #50304)

more is not initialized at this point, and then assigned before being used later on, so is it needed here?

2531 ↗(On Diff #50304)

To generally match the style used elsewhere in this file, can you add a != 0 here?

vmaffione marked 6 inline comments as done. Nov 13 2018, 9:07 AM
vmaffione added inline comments.
224 ↗(On Diff #50254)

Yes, I've noticed the pieces sized accordingly.
The netmap register callback is the equivalent of the IFF_UP and IFF_DOWN routines for netmap interfaces, so it can be called multiple times. It is not an "attach", so you are right.
Support for offloads/features is under discussion, but it must be designed in the netmap core first, before modifying drivers.

So for now I would commit this, which is already a big improvement and does not affect non-netmap users.

1883 ↗(On Diff #50304)

The value returned in the third argument is used on Linux and not used on FreeBSD. Something non-NULL must be provided to tell netmap that this is an RX interrupt (if NULL, it means it is a TX interrupt).
IOW, this is harmless.

2531 ↗(On Diff #50304)

Fixed here and also for the netmap_rx_irq call.

vmaffione marked 3 inline comments as done. Nov 13 2018, 9:08 AM
bryanv accepted this revision. Nov 14 2018, 2:58 PM
This revision is now accepted and ready to land. Nov 14 2018, 2:58 PM
This revision was automatically updated to reflect the committed changes.