
netgraph/ng_car: Add color marking code
Closed, Public

Authored by donner on Oct 22 2019, 3:21 PM.

Details

Summary

Chained policing should be able to reuse the classification
of traffic. A new mbuf_tag type is defined to handle general
QoS marking, and a new subtype is defined to track the color
marking.
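
For illustration, here is a minimal sketch of how such a color tag could be
defined and attached using the mbuf_tags(9) API. The names and numeric values
(M_QOS_COOKIE, M_QOS_COLOR, struct m_qos_color, the color constants) are
assumptions inferred from the tag_cookie/tag_id/tag_data values used in the
test plan below, not necessarily the exact definitions in sys/netgraph/qos.h.

/* Hypothetical excerpt, in the spirit of sys/netgraph/qos.h. */
#include <sys/param.h>
#include <sys/errno.h>
#include <sys/malloc.h>
#include <sys/mbuf.h>

#define M_QOS_COOKIE	1571268051	/* tag_cookie used in the test plan */
#define M_QOS_COLOR	23568		/* tag_id used in the test plan */

enum qos_color {
	QOS_COLOR_GREEN  = 0,		/* conforming, usually left unmarked */
	QOS_COLOR_YELLOW = 1,
	QOS_COLOR_RED    = 2,
};

struct m_qos_color {
	uint8_t	color;			/* one of enum qos_color */
};

/* Mark an mbuf with a color, reusing an existing tag if one is present. */
static int
qos_set_color(struct mbuf *m, enum qos_color c)
{
	struct m_tag *mt;
	struct m_qos_color *qc;

	mt = m_tag_locate(m, M_QOS_COOKIE, M_QOS_COLOR, NULL);
	if (mt == NULL) {
		mt = m_tag_alloc(M_QOS_COOKIE, M_QOS_COLOR,
		    sizeof(struct m_qos_color), M_NOWAIT);
		if (mt == NULL)
			return (ENOMEM);
		m_tag_prepend(m, mt);
	}
	qc = (struct m_qos_color *)(mt + 1);
	qc->color = c;
	return (0);
}

A chained policer can then call m_tag_locate() with the same cookie/id pair to
read a color set by an earlier node instead of re-classifying the packet.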

Test Plan

First, define a simple setup between two eiface nodes.
The downstream end of the traffic is augmented by a tag node
in order to add or react to QoS tags.

ngctl -T- <<END
 mkpeer . tag t t
 name t t
 mkpeer t: eiface eth ether
 mkpeer t: car police lower
 name t:police c
 mkpeer c: eiface upper ether
 msg t: sethookin { thisHook="eth" ifNotMatch="police" }
 msg t: sethookout { thisHook="police" tag_cookie=1571268051 tag_id=23568 tag_len=4 tag_data=[2] }
 msg t: sethookin  { thisHook="police" tag_cookie=1571268051 tag_id=23568 ifNotMatch="eth" ifMatch="drop" }
 msg c: setconf { upstream={ cbs=8192 ebs=8192 cir=10240 greenAction=1 yellowAction=1 redAction=2 opt=1 } downstream={ cbs=1024 ebs=512 cir=2048 greenAction=1 yellowAction=3 redAction=2 } }
END

Upstream traffic is marked "red" unconditionally.
The upstream policy honors the color and drops "red" packets.

Downstream traffic is marked "yellow" depending on a single-rate policy.
"Green" traffic is passed (without marking) and "red" traffic is dropped.

Set up some communication infrastructure, bypassing normal kernel address
resolution with static ARP entries.

ifconfig ngeth1 up
ifconfig ngeth0 up
ifconfig ngeth0 inet 192.168.123.1/24 alias
arp -s 192.168.123.2 00:01:02:03:04:05

Let's check the ARP table.

arp -i ngeth0 -a
? (192.168.123.2) at 00:01:02:03:04:05 on ngeth0 permanent [ethernet]
? (192.168.123.1) at 00:00:00:00:00:00 on ngeth0 permanent [ethernet]

Enable sniffing on the remote end and clear the statistics counters.

tcpdump -ni ngeth1 &
+ msg c: clrstats

Now ping for three seconds.

ping -t3 192.168.123.2
[no packets seen]

Of course, no packets pass, because upstream traffic is marked "red" by ng_tag
and dropped by ng_car.

Now mark upstream packets "yellow", so that they can pass the policy.

+ msg c: getclrstats
Rec'd response "getclrstats" (3) from "[f]:":
Args:   { upstream={ droped=3 red=3 } }
+ msg t: sethookout { thisHook="police" tag_cookie=1571268051 tag_id=23568 tag_len=4 tag_data=[1] }
ping -t3 192.168.123.2
15:26:41.287662 IP 192.168.123.1 > 192.168.123.2: ICMP echo request, id 54114, seq 0, length 64
15:26:42.306261 IP 192.168.123.1 > 192.168.123.2: ICMP echo request, id 54114, seq 1, length 64
15:26:43.379131 IP 192.168.123.1 > 192.168.123.2: ICMP echo request, id 54114, seq 2, length 64
+ msg c: getclrstats
Rec'd response "getclrstats" (3) from "[f]:":
Args:   { upstream={ passed=3 yellow=3 } }

This result matches the expectations.
So kill the sniffer.

kill %1

For the downstream traffic, set up a different testbed and validate it.

ifconfig ngeth1 inet 192.168.124.2/24 alias
arp -s 192.168.124.1 01:02:03:04:05:06
arp -i ngeth1 -a
? (192.168.124.1) at 01:02:03:04:05:06 on ngeth1 permanent [ethernet]
? (192.168.124.2) at 00:00:00:00:00:00 on ngeth1 permanent [ethernet]

Clear the statistics, start sniffing, and give it a try.

tcpdump -ni ngeth0 &
+ msg c: clrstats 
ping -t3 192.168.124.1
15:34:26.502044 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2403, seq 0, length 64
15:34:27.562498 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2403, seq 1, length 64
15:34:28.607083 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2403, seq 2, length 64
+ msg c: getclrstats
Rec'd response "getclrstats" (3) from "[f]:":
Args:   { downstream={ passed=3 green=3 } }

Low-rate packets pass the policy as "green". They are not marked, and the
ng_tag node passes them to the sniffer at the eiface node.

In order to stress the policer, let only the "yellow" packets pass.
Because only the "yellow" action marks packets, they are easy to match.

+ msg t: sethookin  { thisHook="police" tag_cookie=1571268051 tag_id=23568 ifMatch="eth" ifNotMatch="drop" }
ping -f -t3 192.168.124.1
..................15:57:15.308202 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2207, seq 17, length 64
.15:57:15.319225 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2207, seq 18, length 64
.15:57:15.330332 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2207, seq 19, length 64
.15:57:15.341080 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2207, seq 20, length 64
.15:57:15.353153 IP 192.168.124.2 > 192.168.124.1: ICMP echo request, id 2207, seq 21, length 64
...................................................................................................................................................................................................................................................................
--- 192.168.124.1 ping statistics ---
281 packets transmitted, 0 packets received, 100.0% packet loss

Five packets reach the sniffer, while the majority of packets are dropped.
Let's check the counters.

+ msg c: getclrstats
Rec'd response "getclrstats" (3) from "[f]:":
Args:   { downstream={ passed=28 droped=253 green=23 yellow=5 red=253 } }

The first 23 packets are "green"; they are not marked and are dropped by the
tag node. The next five packets are "yellow"; they are marked and handed to
the sniffer. The remaining packets are "red" and are dropped by the car node
itself.
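
As a cross-check, the counters add up: 23 green + 5 yellow + 253 red = 281
packets transmitted. The car node passes 23 + 5 = 28 of them, the tag node
then drops the 23 unmarked "green" packets, and only the 5 "yellow" packets
reach the sniffer, matching the tcpdump output above.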

Diff Detail

Repository
rS FreeBSD src repository - subversion

Event Timeline

bcr added a subscriber: bcr.

OK from manpages.

melifaro added inline comments.
sys/netgraph/ng_car.c
280

Can this be an inline function?

288

Do we need to alloc a tag for the default color?

sys/netgraph/qos.h
47

Handling qos properly is something that has a lot of room for improvement in the networking stack.

Struct m_tag is 24 bytes (amd64) and most of the commonly-used color/priority
fields are less than 8 bits.
I'm wondering what the reasoning is for using multiple different mbuf tags for
passing DS data, instead of a single one.

82

Any reason to limit the value to 3 bits given DSCP values are 6 bits?

donner added inline comments.
sys/netgraph/ng_car.c
280

That would be a good idea, but it would require considerable refactoring of
the code. Because it accesses various locally scoped variables, the interface
for such a function would be much larger and harder to extend than the
current macro.

I'd like to keep the patch small at this stage.

May I keep the macro?

288

Yes, the mark action should always generate a tag. The whole idea of marking is to make the decision externally visible to other consumers. So the defaults of this module need to be made explicit.

Yes, there are other types of consumers (like ng_tag) in our current setup.

sys/netgraph/qos.h
47

Extensibility. Each time a field in a complex tag needs to change, all the
ABI cookies need to change. Having multiple cookies for different tasks
simplifies development and operational handling considerably. Mbuf_tags are
handled by operators using ng_tag scripts, so changing the ABI requires
changing the cookie and, in consequence, the scripts on production machines.
I'd like to spare operators as much of this trouble as possible.

Regarding the memory size: yes, that's a valid argument. Could it be solved
by introducing an mbuf_tag_bitvalues type which collects as many single-bit
values as possible into a globally managed "flag" tag?

In most cases there is no difference in memory overhead, because a packet will have only one or two tags attached.
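
As a rough illustration of that suggestion (purely hypothetical, not part of
this patch), such a shared flag tag might look like this:

/* Hypothetical collection of single-bit QoS flags under one managed cookie. */
struct m_qos_bitvalues {
	uint32_t	flags;			/* bit assignments managed globally */
#define	M_QOS_BIT_EXAMPLE_A	0x00000001	/* illustrative names only */
#define	M_QOS_BIT_EXAMPLE_B	0x00000002
};

All single-bit markings would then share a single tag allocation instead of
paying one m_tag header per flag.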

82

Typical hardware has a very limited number of QoS queues, so DSCP or similar
values are mapped to a limited number of QoS profiles. In netgraph networks,
queues are implemented by ng_tag switching to different paths, which is a
cumbersome task.

Traffic tagging on typical hardware (e.g. Cisco) offers many more values than
DSCP can express, so another mapping is needed. OTOH, practical DSCP values on
interconnects are limited, as they reflect the TOS bits.

But the real reason is that priority is a field from 802.1p, where most
hardware QoS is done.

I have no problem with increasing this value and leaving the details to the
script maintainer of the production environment.

donner marked an inline comment as done.

Updated to revision 358668.

Widen the range of priority classes.

kp added a subscriber: kp.

Approved by: kp (mentor)

This revision is now accepted and ready to land. Jan 27 2021, 7:55 PM
  • bump man page date
  • rebase to current main
This revision now requires review to proceed. Jan 27 2021, 8:19 PM
This revision was not accepted when it landed; it landed in state Needs Review. Jan 27 2021, 8:30 PM
This revision was automatically updated to reflect the committed changes.