ng_eiface(4): Increase default TX queue size to 4096 items.
Abandoned · Public

Authored by afedorov on Mar 31 2023, 5:50 PM.
Tags
None
Subscribers

Details

Reviewers
vmaffione
jhb
glebius
zlei
Group Reviewers
network
Summary

ng_eiface(4) sets its send queue size from the net.link.ifqmaxlen sysctl.
By default this is only 50 packets (the IFQ_MAXLEN default), which is too small for modern networks.
Set it to 4096 instead, as if_epair(4) does.

The main motivation is that some users are unhappy with the default performance of ng_eiface(4).
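
For context, a minimal sketch of the kind of change involved, assuming ng_eiface(4) still sizes the legacy if_snd queue in its node constructor; the constant and helper names below are illustrative, not the actual diff:

/* Sketch only, not the committed diff. */
#include <sys/param.h>
#include <net/if.h>
#include <net/if_var.h>

#define NG_EIFACE_TXQ_LEN	4096	/* hypothetical constant name */

static void
ng_eiface_init_txq(struct ifnet *ifp)	/* hypothetical helper */
{
	/* Previously the queue length came from the global ifqmaxlen (default IFQ_MAXLEN, 50). */
	IFQ_SET_MAXLEN(&ifp->if_snd, NG_EIFACE_TXQ_LEN);
	IFQ_SET_READY(&ifp->if_snd);
}

With the small default, bursts overflow the queue and get dropped, which is consistent with the retransmit counts in the "Before" iperf3 run in the test plan.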

P.S. Why is net.link.ifqmaxlen so low?

Test Plan

Before:

root@xenon:/datapool/projects/freebsd-upstream # ngctl mkpeer eiface link ether                                                                                                                                                               
root@xenon:/datapool/projects/freebsd-upstream # ngctl mkpeer eiface link ether                                                                                                                                                               
root@xenon:/datapool/projects/freebsd-upstream # ifconfig                                                                                                                                                                                     
re0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=8209b<RXCSUM,TXCSUM,VLAN_MTU,VLAN_HWTAGGING,VLAN_HWCSUM,WOL_MAGIC,LINKSTATE>
        ether b4:2e:99:f2:ad:09
        inet 192.168.1.72 netmask 0xffffff00 broadcast 192.168.1.255
        media: Ethernet autoselect (1000baseT <full-duplex>)
        status: active
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> metric 0 mtu 16384
        options=680003<RXCSUM,TXCSUM,LINKSTATE,RXCSUM_IPV6,TXCSUM_IPV6>
        inet6 ::1 prefixlen 128
        inet6 fe80::1%lo0 prefixlen 64 scopeid 0x2
        inet 127.0.0.1 netmask 0xff000000
        groups: lo
        nd6 options=21<PERFORMNUD,AUTO_LINKLOCAL>
ngeth0: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 58:9c:fc:10:ff:e9
        media: Ethernet autoselect
        status: no carrier
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
ngeth1: flags=8802<BROADCAST,SIMPLEX,MULTICAST> metric 0 mtu 1500
        options=28<VLAN_MTU,JUMBO_MTU>
        ether 58:9c:fc:10:d1:72
        media: Ethernet autoselect
        status: no carrier
        nd6 options=29<PERFORMNUD,IFDISABLED,AUTO_LINKLOCAL>
root@xenon:/datapool/projects/freebsd-upstream # jail -c name=j1 vnet persist
root@xenon:/datapool/projects/freebsd-upstream # ngctl connect ngeth0: ngeth1: ether ether
root@xenon:/datapool/projects/freebsd-upstream # ifconfig ngeth1 vnet j1
root@xenon:/datapool/projects/freebsd-upstream # ifconfig ngeth0 172.20.176.1/24 up
root@xenon:/datapool/projects/freebsd-upstream # jexec j1 ifconfig ngeth1 172.20.176.2/24 up
root@xenon:/datapool/projects/freebsd-upstream # ping 172.20.176.2
PING 172.20.176.2 (172.20.176.2): 56 data bytes
64 bytes from 172.20.176.2: icmp_seq=0 ttl=64 time=0.058 ms
64 bytes from 172.20.176.2: icmp_seq=1 ttl=64 time=0.069 ms
^C
--- 172.20.176.2 ping statistics ---
2 packets transmitted, 2 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.058/0.064/0.069/0.005 ms
root@xenon:/datapool/projects/freebsd-upstream # iperf3 -c 172.20.176.2
Connecting to host 172.20.176.2, port 5201
[  5] local 172.20.176.1 port 51394 connected to 172.20.176.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   717 MBytes  6.01 Gbits/sec  9448   15.7 KBytes       
[  5]   1.00-2.00   sec   715 MBytes  5.99 Gbits/sec  9573   66.8 KBytes       
[  5]   2.00-3.00   sec   716 MBytes  6.01 Gbits/sec  9496   65.3 KBytes       
[  5]   3.00-4.00   sec   711 MBytes  5.97 Gbits/sec  9667   66.8 KBytes       
[  5]   4.00-5.00   sec   716 MBytes  6.00 Gbits/sec  9329   15.7 KBytes       
[  5]   5.00-6.00   sec   716 MBytes  6.01 Gbits/sec  9589   66.8 KBytes       
[  5]   6.00-7.00   sec   715 MBytes  6.00 Gbits/sec  9485   74.1 KBytes       
[  5]   7.00-8.00   sec   718 MBytes  6.02 Gbits/sec  9718   12.8 KBytes       
[  5]   8.00-9.00   sec   716 MBytes  6.01 Gbits/sec  9402   15.7 KBytes       
[  5]   9.00-10.00  sec   716 MBytes  6.01 Gbits/sec  9682   18.5 KBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  6.99 GBytes  6.00 Gbits/sec  95389             sender
[  5]   0.00-10.00  sec  6.99 GBytes  6.00 Gbits/sec                  receiver

iperf Done.

After:

root@xenon:/datapool/projects/freebsd-upstream # ngctl mkpeer eiface link ether
root@xenon:/datapool/projects/freebsd-upstream # ngctl mkpeer eiface link ether
root@xenon:/datapool/projects/freebsd-upstream # jexec j1 ifconfig ngeth1 mtu 9000
root@xenon:/datapool/projects/freebsd-upstream # jail -c name=j1 vnet persist
root@xenon:/datapool/projects/freebsd-upstream # ifconfig ngeth0 172.20.176.1/24 up
root@xenon:/datapool/projects/freebsd-upstream # ifconfig ngeth1 vnet j1
root@xenon:/datapool/projects/freebsd-upstream # ping 172.20.176.2
PING 172.20.176.2 (172.20.176.2): 56 data bytes
^C
--- 172.20.176.2 ping statistics ---
2 packets transmitted, 0 packets received, 100.0% packet loss
root@xenon:/datapool/projects/freebsd-upstream # ngctl connect ngeth0: ngeth1: ether ether
root@xenon:/datapool/projects/freebsd-upstream # ping 172.20.176.2
PING 172.20.176.2 (172.20.176.2): 56 data bytes
64 bytes from 172.20.176.2: icmp_seq=0 ttl=64 time=0.066 ms
^C
--- 172.20.176.2 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 0.066/0.066/0.066/0.000 ms
root@xenon:/datapool/projects/freebsd-upstream # iperf3 -c 172.20.176.2 
Connecting to host 172.20.176.2, port 5201
[  5] local 172.20.176.1 port 40303 connected to 172.20.176.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  1016 MBytes  8.52 Gbits/sec    0   2.00 MBytes       
[  5]   1.00-2.00   sec  1.01 GBytes  8.66 Gbits/sec    0   2.00 MBytes       
[  5]   2.00-3.00   sec   998 MBytes  8.37 Gbits/sec    0   2.00 MBytes       
[  5]   3.00-4.00   sec  1.00 GBytes  8.62 Gbits/sec    0   2.00 MBytes       
[  5]   4.00-5.00   sec  1.01 GBytes  8.70 Gbits/sec    0   2.00 MBytes       
[  5]   5.00-6.00   sec  1022 MBytes  8.58 Gbits/sec    0   2.00 MBytes       
[  5]   6.00-7.00   sec  1.01 GBytes  8.63 Gbits/sec    0   2.00 MBytes       
[  5]   7.00-8.00   sec  1003 MBytes  8.41 Gbits/sec    0   2.00 MBytes       
[  5]   8.00-9.00   sec  1.01 GBytes  8.65 Gbits/sec    0   2.00 MBytes       
[  5]   9.00-10.00  sec  1.01 GBytes  8.70 Gbits/sec    0   2.00 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec    0             sender
[  5]   0.00-10.00  sec  9.99 GBytes  8.58 Gbits/sec                  receiver

iperf Done.
root@xenon:/datapool/projects/freebsd-upstream #

Diff Detail

Lint
Lint Skipped
Unit
Tests Skipped

Event Timeline

Thanks for the benchmark!

Why is "net.link.ifqmaxlen" so low?

The net.link.ifqmaxlen sysctl knob is a loader tunable; many drivers initialize their queue length from it. It is the system-wide default.

In most cases the machine does not have many interfaces, so a global default tunable is adequate.
If increasing this global tunable has side effects for some driver (USB Ethernet, perhaps), or if a driver wants its own ifqmaxlen tunable, then I'd suggest, for example, a net.link.netgraph.ether.ifqmaxlen knob rather than a hard-coded value.

That can also apply to epair(4), if @kp prefers.

A fixed ifqmaxlen does not fit all situations, although I cannot tell whether a smaller one would win in certain cases.
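
A minimal sketch of what such a per-driver knob could look like, using the standard SYSCTL_NODE/SYSCTL_INT pattern; the OID path follows the suggestion above and the variable name ng_eiface_ifqmaxlen is an assumption, not an existing interface:

/* Sketch of the suggested tunable, not committed code. */
#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

static int ng_eiface_ifqmaxlen = 4096;	/* hypothetical variable name */

SYSCTL_DECL(_net_link);
static SYSCTL_NODE(_net_link, OID_AUTO, netgraph,
    CTLFLAG_RD | CTLFLAG_MPSAFE, 0, "netgraph interface parameters");
static SYSCTL_NODE(_net_link_netgraph, OID_AUTO, ether,
    CTLFLAG_RD | CTLFLAG_MPSAFE, 0, "ng_eiface(4) parameters");
SYSCTL_INT(_net_link_netgraph_ether, OID_AUTO, ifqmaxlen,
    CTLFLAG_RDTUN, &ng_eiface_ifqmaxlen, 0,
    "Default transmit queue length for ng_eiface(4) interfaces");

The constructor would then size the queue from ng_eiface_ifqmaxlen instead of the global ifqmaxlen, and CTLFLAG_RDTUN lets the value be set from loader.conf, matching how net.link.ifqmaxlen itself behaves.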

I don't like this approach. We should stop using IF_QUEUE altogether.
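
For readers unfamiliar with the remark: dropping the legacy IF_QUEUE/if_snd machinery generally means giving the interface its own if_transmit method and a driver-owned queue. A rough sketch under that assumption (the mbufq-based handler below is my illustration, not something proposed by this review):

/*
 * Illustrative only: an if_transmit handler that bypasses if_snd and
 * queues into a driver-private mbufq protected by a mutex.  The
 * consumer side (draining the queue) is omitted.
 */
#include <sys/param.h>
#include <sys/lock.h>
#include <sys/mutex.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

struct ng_eiface_softc_sketch {		/* hypothetical softc layout */
	struct mtx	txq_mtx;
	struct mbufq	txq;		/* mbufq_init()ed with maxlen 4096 at attach */
};

static int
ng_eiface_transmit_sketch(struct ifnet *ifp, struct mbuf *m)
{
	struct ng_eiface_softc_sketch *sc = ifp->if_softc;
	int error;

	mtx_lock(&sc->txq_mtx);
	error = mbufq_enqueue(&sc->txq, m);	/* returns ENOBUFS when full */
	mtx_unlock(&sc->txq_mtx);
	if (error != 0)
		m_freem(m);
	/* A real driver would now kick the consumer (taskqueue, swi, ...). */
	return (error);
}

The general point is that the queue becomes driver-owned state instead of the shared legacy if_snd ifqueue.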