- User Since
- May 27 2014, 9:32 AM
Mon, Jan 17
Wed, Jan 12
Should we remove the packets in those queues on vnet destruction?
Tue, Jan 11
It's great we'll get DSCP tests!
Nit: maybe it's worth trying to encode it with python-scapy? It has both sending and receiving support for all kinds of packets.
Happy to chat more on that.
Sat, Jan 8
Wed, Jan 5
Sat, Jan 1
Wed, Dec 29
Mon, Dec 27
Regardless of the encapsulation (vlan or qinq), the vlan code effectively does the same thing: it pulls up the header and records the tag (a rough sketch follows).
If the NIC does not support QinQ tag removal (or vlanhwtag is explicitly turned off), such frames will be dropped after the change, which is the breakage I'm talking about.
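For reference, a rough sketch (not the exact FreeBSD source) of the software decap pattern I mean, in an ether_demux()-style context. The vlan_soft_decap() name is mine; the m_pullup()/m_adj() KPIs, struct ether_vlan_header, and the M_VLANTAG/ether_vtag fields are the real ones:

```
#include <sys/param.h>
#include <sys/systm.h>
#include <sys/mbuf.h>
#include <net/ethernet.h>

static struct mbuf *
vlan_soft_decap(struct mbuf *m)
{
	struct ether_vlan_header *evl;

	/* Pull up the full Ethernet + 802.1Q header. */
	if (m->m_len < sizeof(*evl) &&
	    (m = m_pullup(m, sizeof(*evl))) == NULL)
		return (NULL);		/* pullup failed: frame is gone */
	evl = mtod(m, struct ether_vlan_header *);

	/* Record the tag in the mbuf packet header. */
	m->m_pkthdr.ether_vtag = ntohs(evl->evl_tag);
	m->m_flags |= M_VLANTAG;

	/* Strip the 4-byte encapsulation by sliding the Ethernet header. */
	bcopy((char *)evl, (char *)evl + ETHER_VLAN_ENCAP_LEN,
	    ETHER_HDR_LEN - ETHER_TYPE_LEN);
	m_adj(m, ETHER_VLAN_ENCAP_LEN);
	return (m);
}
```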
Really good simplification!
Sun, Dec 26
Let me start with my view of the general situation first.
When we tear down a VNET, we need to gradually shut down all virtualized subsystems. This shutdown is by no means atomic, which means that certain subsystems shut down faster than others.
Some subsystems are coupled with others either explicitly (like PCBs referencing nhops and lles) or implicitly (by passing mbufs up or down the stack). The problem is made harder by the fact that certain entities like nhops or LLEs are epoch-protected, which can extend a subsystem's lifespan requirement non-deterministically.

Such couplings present two contradictory requirements for a subsystem: (1) try to die early so teardown makes progress, and (2) try to keep the necessary data structures alive as long as possible so the other subsystems don't crash.

That said, having the ability to "close the incoming gate" for a subsystem is important: once the gate is closed nothing can cause the addition of new state, so all existing state can be wiped, keeping only a minimal set of data structures that can die close to the end of teardown (sketched below).
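A minimal sketch of the "close the incoming gate" pattern, assuming made-up frob_* names; only the VNET_DEFINE_STATIC/VNET macros and NET_EPOCH_WAIT() are the real KPIs:

```
#include <sys/param.h>
#include <sys/errno.h>
#include <net/vnet.h>
#include <net/if_var.h>		/* NET_EPOCH_WAIT() */

/* Per-VNET flag flipped at the start of teardown. */
VNET_DEFINE_STATIC(bool, frob_gate_closed) = false;
#define	V_frob_gate_closed	VNET(frob_gate_closed)

/* Refuse to create new state once the gate is closed. */
static int
frob_state_add(void)
{
	if (V_frob_gate_closed)
		return (ENXIO);
	/* ... allocate and link a new entry ... */
	return (0);
}

/* Early teardown: close the gate, wait out epoch readers, wipe state. */
static void
vnet_frob_uninit(void)
{
	V_frob_gate_closed = true;
	NET_EPOCH_WAIT();
	/*
	 * Nothing can add state now; the bulk can be freed here, leaving
	 * only the minimal structures that must survive until late teardown.
	 */
}
```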
Dec 26 2021
Dec 19 2021
Dec 16 2021
Could you by any chance add the rationale to the diff description?
Dec 14 2021
Dec 13 2021
Dec 10 2021
A slightly different version landed in D30398.
Committed as 8170a7d43835047f9c1548a081eea45116473995.
Q: Speaking of 256k - do we want to create different defaults for low-memory and high-memory systems?
Landed as a375ec52a7b423133f66878ecf002efc3b6e9fca.
Dec 5 2021
Dec 4 2021
Nov 11 2021
I'd say generally we want to protect against endless loops in our networking configuration, be it lagg/bridge/vlan/gre/tun or any other logical interface moving mbufs to the next interface. It is possible to perform a control-plane loop check for some combinations, but not for all. Maybe it's worth approaching it similarly to IP: have a TTL budget of, say, 16 hops and simply decrease it on every *logical interface* traversal? It could be 4 bits in the mbuf or a common tag, not cleared until the mbuf is destroyed (see the sketch below). Thoughts?
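A minimal sketch of the tag variant, with an invented cookie, type, and helper name; only the m_tag_*() KPIs are the real FreeBSD ones. Tags live until the mbuf is freed, which gives the "not cleared until destroyed" semantics for free:

```
#include <sys/param.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>

#define	MTAG_IFLOOP		0x4c4f4f50	/* arbitrary private cookie */
#define	MTAG_IFLOOP_TTL		0
#define	IFLOOP_TTL_MAX		16

/*
 * Charge one hop against the mbuf's budget; returns false when the
 * budget is exhausted and the caller should drop the packet.
 */
static bool
ifloop_charge(struct mbuf *m)
{
	struct m_tag *mtag;
	uint8_t *ttl;

	mtag = m_tag_locate(m, MTAG_IFLOOP, MTAG_IFLOOP_TTL, NULL);
	if (mtag == NULL) {
		/* First logical hop: attach a fresh budget. */
		mtag = m_tag_alloc(MTAG_IFLOOP, MTAG_IFLOOP_TTL,
		    sizeof(uint8_t), M_NOWAIT);
		if (mtag == NULL)
			return (false);	/* no memory: fail closed */
		*(uint8_t *)(mtag + 1) = IFLOOP_TTL_MAX;
		m_tag_prepend(m, mtag);
	}
	ttl = (uint8_t *)(mtag + 1);
	if (*ttl == 0)
		return (false);
	(*ttl)--;
	return (true);
}
```

Each logical interface's transmit path would call ifloop_charge() before handing the mbuf to the next interface and drop it on false.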
Nov 10 2021
Nov 4 2021
Any chance you can fill in the "testing" section?
Oct 23 2021
Oct 20 2021
Oct 13 2021
I have some changes related to linking nhops with llentries, so the datapath avoids looking up / referencing llentries for routes with a gateway. Per-lle counters may be a bit hard to support in that scenario.
Are other methods of getting the counters (dtrace, ipfw counters) off the table?
Oct 12 2021
Oct 8 2021
Oct 7 2021
Would it be possible to describe a use case for the feature?
Oct 6 2021
Oct 1 2021
Reflect glebius@'s comments.
Sep 30 2021
Sep 13 2021
With the default set to deny, it might not be exactly the desired outcome. Do you think that a separate module with dummynet-for-pf (for 12/13) is too much of a hassle? Especially given ae@'s plan to change dummynet for ipfw?