In theory, TCP CUBIC maintains a TCP-friendly window that guarantees performance no worse than standard AIMD TCP congestion control (TCP SACK, TCP NewReno). In other words, CUBIC should match NewReno's performance in the worst case. This is covered by the mathematical model of Eq. 4 in RFC 8312 `Section 4.2. TCP-Friendly Region`. However, the equation requires a reliable and accurate timer to approximate the TCP-friendly window from the unified CUBIC `alpha/beta` parameters. When the timer is too coarse, for example kern.hz==100 on some virtual machine platforms, LAN performance suffers because the TCP-friendly window approximation loses precision. In a WAN with much higher latency this is acceptable, because the 10ms granularity does not distort the dominant CUBIC-window (not TCP-friendly window) estimation.
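To make the precision issue concrete, here is a minimal sketch (not the in-kernel code) of evaluating Eq. 4 from a tick-based clock; the names `wmax_segs`, `ticks_elapsed`, `rtt_usec` are illustrative only, and floating point is used for clarity where the kernel would use fixed-point arithmetic:

```c
#include <stdint.h>

#define CUBIC_BETA 0.7	/* RFC 8312 beta_cubic */

/*
 * Illustrative-only evaluation of RFC 8312 Eq. 4:
 *   W_est(t) = W_max*beta_cubic + [3*(1-beta_cubic)/(1+beta_cubic)] * (t/RTT)
 * using a coarse tick counter as the time source.
 */
static double
tcp_friendly_cwnd_segs(double wmax_segs, uint32_t ticks_elapsed,
    uint32_t hz, uint32_t rtt_usec)
{
	/* With hz == 100, t only advances in 10000 usec (10 ms) steps. */
	double t_usec = (double)ticks_elapsed * (1000000.0 / hz);
	double aimd_alpha = 3.0 * (1.0 - CUBIC_BETA) / (1.0 + CUBIC_BETA);

	return (wmax_segs * CUBIC_BETA + aimd_alpha * (t_usec / rtt_usec));
}
```

Between ticks, `t` does not advance at all, so on a sub-millisecond LAN RTT the estimate can sit at `W_max*beta_cubic` for many RTTs in a row.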
Secondly, finding an accurate and reliable timer ([[ https://people.freebsd.org/~davide/asia/naplesnew.pdf | timers ]]) is a challenge if we pursue a general solution across all environments. Therefore, reusing the NewReno AIMD functionality to compute the TCP-friendly window exactly seems efficient and economical.
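As a rough illustration of the reuse idea (the structure and names below are hypothetical, not the actual patch, which hooks into the existing NewReno code paths): maintain a shadow NewReno window with the standard AIMD rules on every ACK and loss, and use it directly as the TCP-friendly window instead of approximating it from a timer.

```c
#include <stdint.h>

/* Hypothetical per-connection state for this sketch only. */
struct cubic_shadow {
	uint32_t newreno_cwnd;	/* shadow NewReno congestion window (bytes) */
	uint32_t acked_bytes;	/* bytes ACKed since the last cwnd increase */
};

/* Standard AIMD additive increase: about one MSS per RTT in cong. avoidance. */
static void
shadow_newreno_ack(struct cubic_shadow *cs, uint32_t bytes_acked, uint32_t mss)
{
	cs->acked_bytes += bytes_acked;
	if (cs->acked_bytes >= cs->newreno_cwnd) {
		cs->acked_bytes -= cs->newreno_cwnd;
		cs->newreno_cwnd += mss;
	}
}

/* Standard AIMD multiplicative decrease on congestion (beta = 0.5). */
static void
shadow_newreno_loss(struct cubic_shadow *cs, uint32_t mss)
{
	cs->newreno_cwnd /= 2;
	if (cs->newreno_cwnd < 2 * mss)
		cs->newreno_cwnd = 2 * mss;
}

/* In the TCP-friendly region, CUBIC uses whichever window is larger. */
static uint32_t
cubic_effective_cwnd(uint32_t cubic_cwnd, const struct cubic_shadow *cs)
{
	return (cubic_cwnd > cs->newreno_cwnd ? cubic_cwnd : cs->newreno_cwnd);
}
```

No timestamp is read anywhere above, so the result is independent of kern.hz.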
Pros:
(1) it calculates the exact TCP-friendly window (the NewReno window) in congestion avoidance without losing precision.
(2) it boosts performance in a LAN where packet loss is severe.
(3) it does not require an accurate timer, making it less dependent on ticks (a high kern.hz is expensive on some platforms).
(4) it has little impact on traffic in a WAN where RTT > 10ms and CUBIC's concave/convex growth dominates.
Cons:
not compliant with Eq. 4 of RFC 8312 `Section 4.2. TCP-Friendly Region`:
> W_est(t) = W_max*beta_cubic + [3*(1-beta_cubic)/(1+beta_cubic)] * (t/RTT) (Eq. 4)
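For reference, with the default beta_cubic = 0.7 this reduces to W_est(t) = 0.7*W_max + ~0.53*(t/RTT) (since 3*0.3/1.7 ≈ 0.53), i.e. roughly half a segment of growth per RTT; when the RTT is far below the 10ms tick granularity, t cannot be measured finely enough for this term to track the window NewReno would actually have.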
This change performs better than using a finer timer. For example, compare the rows below with the 10k and 100k **kern.hz** results in [[ https://wiki.freebsd.org/chengcui/testTCPCCinVM#test_result | testTCPCCinVM#test_result ]].
| kern.hz value | TCP congestion control algorithm | iperf throughput at 1% data packet drop rate | iperf throughput at 2% data packet drop rate |
| --- | --- | --- | --- |
| 100000 (100k) | stock cubic | 719 Mbits/sec | 547 Mbits/sec |
| 10000 (10k) | stock cubic | 923 Mbits/sec | 683 Mbits/sec |
| 1000 (1k) | stock cubic | 917 Mbits/sec | 527 Mbits/sec |
| 100 | stock newreno | 915 Mbits/sec | 767 Mbits/sec |
| 100 | stock cubic | **168** Mbits/sec | **83.8** Mbits/sec |
| 100 | this cubic patch | **921** Mbits/sec (+4.5X) | **876** Mbits/sec (+9.5X) |