Add jitter to the ICMP bandwidth limit
I'm not sure recalculating the jitter value on each badport_bandlim() call gives a desirable distribution. If my math is right, we'd transmit packets with 100% probability up to (V_icmplim - V_icmplim_jitter), declining linearly to 0% at (V_icmplim + V_icmplim_jitter).
How frequently is badport_bandlim() called, and do the jitter numbers need to be unpredictable? If a non-cryptographic PRNG would suffice, prng32_bounded() might be a suitable replacement with less CPU use.
Line 130 (On Diff #79996):
I would call this inc rather than jitter, since it's the increment being added.
We probably want this to be 3 (so ICMPs are counted as 0, 1, or 2 units towards the limit). Making it 16 would dramatically cut the number of packets which get out before we hit the limit.
Forgive my (carefully cultivated) ignorance of the network stack, but I'd like to understand this:
- We already have a bandwidth limit for outgoing ICMP packets
- We already have a packets-per-second limit for outgoing ICMP packets
- Outgoing ICMP packets which exceed either the bandwidth or packets-per-second limit are dropped and never sent
- This change jitters the packets-per-second limit
Is that correct?
Correct. A constant rate limit allows an attacker to infer which source port the request is coming from, reducing the randomness from 32 bits back to the original 16 bits. (Same mistake WPA made with WPS.)
Won't this make the reported message incorrect? In the worst case (assume inc was 2 on every call, and modulo an off-by-one) we could start limiting after only 100 packets had been sent, but report limiting from 200 to 200.