In D11370#249572, @ae wrote: I also ran the same test with vlans created on top of lagg, using 2x25G Mellanox adapters.
I didn't see a measurable performance drop there; the machine is able to forward 14Mpps with our RX direct vlan handling patch. I think that is acceptable.
Aug 15 2017
In D11370#249511, @ae wrote: I have tested your patch against forwarding performance in our test environment.
[packet generator] -> [ switch ] -> [ix.10 -> ix.100]
So, the FreeBSD 12 machine receives packets tagged with vlan10 on ixgbe(4) and then sends them into vlan100 through the same interface.
With the traffic distribution used, this test machine is able to forward about 1.3Mpps, both with and without your patch.
Then I applied our local patch that reduces RX overhead by handling vlans directly in ixgbe(4). With that patch the same machine can forward 3Mpps without packet loss; with your patch this value drops to 2.9Mpps, so the locking overhead costs about 100kpps.
Also, I think the possible panic in vlan_input() due to the race is now fixed.
Aug 4 2017
Selectively print "hwaddr" from ifconfig(8).
Aug 3 2017
In D11725#245735, @ae wrote: I proposed this patch for the discussed problem:
https://lists.freebsd.org/pipermail/freebsd-net/2016-December/046650.html
But glebius@ said that he would have a better solution after the "listening sockets revamp".
In D11683#245445, @hselasky wrote: Hi,
We are currently testing this patch internally; testing will be done by Monday. Do you mind if I push it, so I can have it in my MFC queue?
--HPS
Aug 2 2017
- add per-ring sysctl for defrag_attempts
Aug 1 2017
Here's an older test output that shows defrag_attempts and oversized_packets:
durinf004-1: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-1: hw.mlxen1.stat.defrag_attempts: 117
durinf004-2: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-2: hw.mlxen1.stat.defrag_attempts: 1
Fri Jul 21 18:46:25 PDT 2017
durinf004-1: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-1: hw.mlxen1.stat.defrag_attempts: 121
durinf004-2: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-2: hw.mlxen1.stat.defrag_attempts: 1
durinf004-1# tail -n 10 logfile
Sun Jul 23 23:00:25 PDT 2017
durinf004-1: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-1: hw.mlxen1.stat.defrag_attempts: 9306
durinf004-2: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-2: hw.mlxen1.stat.defrag_attempts: 47
Sun Jul 23 23:00:55 PDT 2017
durinf004-1: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-1: hw.mlxen1.stat.defrag_attempts: 9306
durinf004-2: hw.mlxen1.stat.tx_oversized_packets: 0
durinf004-2: hw.mlxen1.stat.defrag_attempts: 47
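For context, exporting a per-ring read-only counter like these takes only a few lines of sysctl(9) glue. A minimal sketch follows; the structure and function names are hypothetical, not the actual mlx4en code:

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/sysctl.h>

/* Hypothetical per-ring state; field names are illustrative. */
struct tx_ring_stats {
	struct sysctl_ctx_list sysctl_ctx;
	uint64_t defrag_attempts;
	uint64_t tx_oversized_packets;
};

static void
ring_stats_sysctl_attach(struct tx_ring_stats *rs, struct sysctl_oid *parent)
{
	struct sysctl_oid_list *children = SYSCTL_CHILDREN(parent);

	/* Assumes sysctl_ctx_init(&rs->sysctl_ctx) was done at attach time. */
	SYSCTL_ADD_U64(&rs->sysctl_ctx, children, OID_AUTO,
	    "defrag_attempts", CTLFLAG_RD, &rs->defrag_attempts, 0,
	    "Number of m_defrag() calls attempted on this ring");
	SYSCTL_ADD_U64(&rs->sysctl_ctx, children, OID_AUTO,
	    "tx_oversized_packets", CTLFLAG_RD, &rs->tx_oversized_packets, 0,
	    "Packets dropped for exceeding the ring's segment limit");
}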
In D11683#244890, @hselasky wrote: Have you tested this patch?
- add per-ring sysctl node for tso_packets
In D11683#244755, @hselasky wrote: BTW: I think the same issue exists for mlx5en.
Would you like me to make a separate revision for mlx5en or combine it into this one?
- unload / free the mbuf if everything was inlined
- Forgot to add the sysctl for defrag_attempts.
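For background on the fix being discussed in D11683, the usual m_defrag(9) pattern on a transmit path looks roughly like the sketch below. MAX_TX_SEGS and the helper name are placeholders, not the driver's actual code:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>

#define MAX_TX_SEGS	32	/* placeholder for the ring's segment limit */

/*
 * Returns the (possibly replaced) mbuf chain, or NULL on failure, in
 * which case the original chain has already been freed.
 */
static struct mbuf *
tx_defrag_if_needed(struct mbuf *mb, int nsegs)
{
	struct mbuf *m_new;

	if (nsegs <= MAX_TX_SEGS)
		return (mb);
	m_new = m_defrag(mb, M_NOWAIT);
	if (m_new == NULL) {
		/* m_defrag() does not free the chain on failure. */
		m_freem(mb);
		return (NULL);
	}
	return (m_new);
}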
Jul 31 2017
Rebasing on HEAD.
mjoras closed D11797: Add myself to the calendar. by committing rS321804: Add myself to the calendar.
Add myself to the calendar.
Jul 29 2017
Jul 25 2017
In D11725#242924, @wollman wrote: I wouldn't necessarily agree that that's a poor abstraction. The other alternative would be for sonewconn(9) to pass more information back to the caller on failure, and expect the caller to print a meaningful message -- and I'd want to see some evidence that that doesn't make the common case (no failure) any slower. That would also require every caller to implement its own rate-limiting.
I should say that I personally don't think it's a bad idea to have an abstraction for getting a human-readable representation of a socket, but I figured that change might cause a bit more controversy since there's no precedent for it today.
In D11725#242911, @wollman wrote: Have to say, I cannot imagine any circumstances in which the message that's on by default would be useful when the message that's off by default isn't printed. The PCB address is useless debugging information here, whereas the 4-tuple of the dropped connection is actually meaningful -- so I'd reverse them (or better yet, figure out a way to combine the two messages while hiding the PCB address under a debug flag).
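To illustrate the direction wollman suggests, a rate-limited message carrying the 4-tuple rather than the PCB pointer might look something like this sketch (function and variable names are made up; this is not the committed change):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/syslog.h>
#include <sys/time.h>
#include <netinet/in.h>

static struct timeval sketch_lasttime;
static int sketch_curpps;

/*
 * Hypothetical helper: log a listen-queue overflow with the 4-tuple of
 * the dropped connection, at most once per second via ppsratecheck(9).
 */
static void
sketch_log_overflow(struct in_addr laddr, uint16_t lport,
    struct in_addr faddr, uint16_t fport)
{
	uint32_t l, f;

	if (!ppsratecheck(&sketch_lasttime, &sketch_curpps, 1))
		return;
	l = ntohl(laddr.s_addr);
	f = ntohl(faddr.s_addr);
	log(LOG_DEBUG, "sonewconn: listen queue overflow on %u.%u.%u.%u:%u, "
	    "dropping connection from %u.%u.%u.%u:%u\n",
	    (l >> 24) & 0xff, (l >> 16) & 0xff, (l >> 8) & 0xff, l & 0xff,
	    ntohs(lport),
	    (f >> 24) & 0xff, (f >> 16) & 0xff, (f >> 8) & 0xff, f & 0xff,
	    ntohs(fport));
}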
Jul 20 2017
mjoras set the repository for D11683: Fix mlx4en(4) to properly call m_defrag. to rS FreeBSD src repository - subversion.
Add myself (mjoras) as a new src committer
Add myself and mentor line to committers-src.dot.
Wrong name :)
Jul 11 2017
- further clarify comment
Jul 9 2017
Jul 8 2017
Jun 30 2017
In D11370#236225, @mav wrote: In D11370#236121, @matt.joras_gmail.com wrote: The reasoning for keeping the counter increment under the lock is to protect against the possibility of the vlan ifnet being freed while we are touching the counter, since the vlan ifnet can't be freed while we still hold the read lock.
It looks odd to me that the network stack does not protect against this. What if the interface decides to go away earlier, just after entering vlan_transmit? I have a subtle feeling that this may only hide the problem.
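For reference, the pattern under debate is roughly the following simplified sketch (not the actual if_vlan code; the lock name and the hand-off to the parent are placeholders):

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/lock.h>
#include <sys/rmlock.h>
#include <sys/mbuf.h>
#include <net/if.h>
#include <net/if_var.h>

/* Hypothetical lock; assume it was rm_init()'d at module load. */
static struct rmlock vlan_sketch_lock;

static int
vlan_transmit_sketch(struct ifnet *ifp, struct mbuf *m)
{
	struct rm_priotracker tracker;
	int error;

	rm_rlock(&vlan_sketch_lock, &tracker);
	/* Stand-in for handing the packet to the parent interface. */
	m_freem(m);
	error = 0;
	/*
	 * if_inc_counter() is atomic on its own, but incrementing while
	 * the read lock is still held guarantees the vlan ifnet (and the
	 * counter storage it owns) cannot be detached and freed under us.
	 */
	if_inc_counter(ifp, IFCOUNTER_OPACKETS, 1);
	rm_runlock(&vlan_sketch_lock, &tracker);
	return (error);
}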
Jun 29 2017
- edit comment to use proper English
In D11370#235953, @mav wrote: I have no objections. Just not sure why if_inc_counter() is repeatedly called under the lock. IIRC it is atomic by itself, and the interface pointer should be valid anyway.
Jun 27 2017
Jun 26 2017
Jan 12 2017
- Accidentally left in an internal tag.
mjoras retitled D9164: Set ifm_cur to NULL in ifmedia_removeall. from to Set ifm_cur to NULL in ifmedia_removeall..
Apr 5 2016
In D5829#124731, @hselasky wrote: Can this be fixed in the VLAN / LAGG code instead?
In D5829#124568, @hselasky wrote: What code paths cannot sleep in the vlan filter callbacks?
This also affects mlx5en, I believe. Would it be better to fix the caller rather than all of the clients?
With the non-sleepable rmlock(9) in both if_lagg and if_vlan, any vlan_config/unconfig event handler cannot sleep. Further, even if both of those rmlocks were changed to sleepable locks, the if_lagg vlan_config caller (see lagg_register_vlan) runs under a shared lock, not an exclusive one, so per rmlock(9) it can never sleep. This might be fixable by making the locks sleepable and invoking all vlan_config/unconfig event handlers under exclusive locks, but that seemed like a heavy-handed solution.
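One common way to defer such work out of a non-sleepable handler is a taskqueue(9). A minimal sketch of that shape follows, with hypothetical names; this is not necessarily how the actual mlxen patch does it:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/kernel.h>
#include <sys/malloc.h>
#include <sys/taskqueue.h>
#include <net/if.h>
#include <net/if_var.h>

/* Hypothetical deferral of a vlan_config event. */
struct vlan_reg_task {
	struct task task;
	uint16_t vid;
};

static void
vlan_register_taskfunc(void *arg, int pending)
{
	struct vlan_reg_task *vt = arg;

	/*
	 * Runs in taskqueue(9) thread context, where sleeping is allowed,
	 * so the driver can wait on firmware while programming the
	 * hardware VLAN filter for vt->vid.
	 */
	free(vt, M_TEMP);
}

/* The vlan_config eventhandler itself may not sleep. */
static void
vlan_config_handler(void *arg, struct ifnet *ifp, uint16_t vid)
{
	struct vlan_reg_task *vt;

	vt = malloc(sizeof(*vt), M_TEMP, M_NOWAIT | M_ZERO);
	if (vt == NULL)
		return;
	vt->vid = vid;
	TASK_INIT(&vt->task, 0, vlan_register_taskfunc, vt);
	taskqueue_enqueue(taskqueue_thread, &vt->task);
}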
Apr 4 2016
mjoras retitled D5829: Defer vlan register/deregister operations in mlxen. from to Defer vlan register/deregister operations in mlxen..
In D5825#124451, @hselasky wrote: Could you add a comment with a few words about the locking strategy and order?
- Fixup comments to be more explicit about the locking strategy.