I want to introduce a new bhyve network backend that allows connecting a VM to a netgraph(4) mesh.
The backend uses the socket API with the PF_NETGRAPH protocol family, provided by ng_socket(4).
The main motivations:
1. Netgraph is a well-known, mature FreeBSD facility for building flexible network topologies.
2. Netgraph already provides several useful modules: L2 bridging, VLANs, NAT, ipfw integration, pseudo-interfaces (ng_eiface(4)), QoS, etc.
3. The socket API is flexible and offers good performance.
4. bhyve is allowed to run in a jail, and netgraph is virtualized through VNET, so it may be interesting to run a group of virtual machines inside a jail.
Some notes:
The default netgraph socket buffers are too small (net.graph.recvspace: 20480 and net.graph.maxdgram: 20480). To achieve good performance, you may need to increase kern.ipc.maxsockbuf; in my tests the optimal value was ~4 MB.
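For example, a minimal sketch of that tuning step, using the ~4 MB value from my tests (the exact value is workload-dependent; add the same line to /etc/sysctl.conf to persist it across reboots):
```
# sysctl kern.ipc.maxsockbuf=4194304
```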
With this value, on a Xeon v4 I achieved the following iperf3 results:
- 5-6 Gbit/s: VM (MTU 1500) - ng_bridge - VM (MTU 1500)
- 11-12 Gbit/s: VM (MTU 9000) - ng_bridge - VM (MTU 9000)
- 22 Gbit/s: VM (MTU 64K) - ng_bridge - VM (MTU 64K); this simply tests throughput with virtio-net TSO enabled
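For reference, a sketch of how such a measurement can be run between the two VMs (the server address 10.0.0.1 and the duration are illustrative, not part of the original setup):
```
# iperf3 -s                    (on the first VM)
# iperf3 -c 10.0.0.1 -t 30     (on the second VM)
```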
To use the new backend:
```
-s X:Y:Z,[virtio-net,e1000],netgraph:socket=[ng_socket name]:path=[destination node]:hook=[our socket src hook]:peerhook=[dst node hook]
```
with ng_bridge:
```
-s X:Y:Z,[virtio-net,e1000],netgraph:socket=vmX:path=vmbridge:hook=vmlink:peerhook=link0
```
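Note that the vmbridge node must already exist when bhyve starts. A sketch of one way to create it, assuming an ng_bridge(4) recent enough to support the setpersistent control message (node and hook names match the example above):
```
# kldload ng_bridge
# ngctl -f - <<EOF
mkpeer . bridge tmp link0
name .:tmp vmbridge
msg vmbridge: setpersistent
rmhook . tmp
EOF
```
The temporary hook only anchors the new node long enough to name it; setpersistent keeps the bridge alive after the hook is removed.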
or the short version:
```
-s X:Y:Z,[virtio-net,e1000],netgraph:path=vmbridge:peerhook=link0
```
To connect a VM directly to a network interface (via its ng_ether(4) node):
```
# kldload ng_ether
# bhyve ... -s 5,[virtio-net,e1000],netgraph:path=ix0:peerhook=lower
```