This is a fantastic article, very readable if you know TCP. It delves into the latency vs. bandwidth issues very thoroughly and describes a new control algorithm that optimizes both without needing to be deployed everywhere to work.

My very favorite part is the explanation of their “uncertainty principle”: you can’t simultaneously measure a connection’s minimum ping time and its maximum bandwidth (really the round-trip propagation time and the bottleneck bandwidth). To measure maximum bandwidth you have to saturate the pipe, which increases latency due to queuing; to measure minimum latency you need an empty queue, because any standing queue inflates the RTT (round-trip time).
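Here’s a toy numeric model of that trade-off. The numbers and the `observe` helper are my own illustration, not anything from the paper; the point is just that whichever rate you send at, only one of the two measurements comes out true:

```python
# Toy model of the measurement trade-off (illustrative numbers, not from
# the paper). RTPROP and BTLBW are the "true" path properties; a sender
# only sees RTT samples and delivery-rate samples.

RTPROP = 0.040   # true minimum round-trip time: 40 ms
BTLBW = 100.0    # true bottleneck bandwidth: 100 Mbit/s

def observe(send_rate_mbps, seconds=1.0):
    """Return (measured_rtt, measured_delivery_rate) after sending at
    send_rate_mbps for `seconds`. Any persistent excess over BTLBW piles
    up in the bottleneck queue and inflates the RTT."""
    delivery_rate = min(send_rate_mbps, BTLBW)
    excess_mbits = max(send_rate_mbps - BTLBW, 0.0) * seconds
    queue_delay = excess_mbits / BTLBW   # time to drain the backlog
    return RTPROP + queue_delay, delivery_rate

# Below the bottleneck: the RTT reading is true, the bandwidth one is not.
print(observe(50.0))    # (0.04, 50.0)  -> real RTprop, but BtlBw unknown

# Above the bottleneck: the bandwidth reading is true, the RTT one is not.
print(observe(150.0))   # (0.54, 100.0) -> real BtlBw, but RTT inflated
```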

So, the BBR algorithm cycles through RTT-probing, bandwidth-probing, and normal modes. In RTT mode, it deliberately rate-limits the sender to let any queues drain a bit; in bandwidth mode, it deliberately sends packets slightly faster than it thinks the pipe can handle, to see whether it gets the expected latency increase or not. In normal mode, it uses the data collected in the other modes to send packets at exactly the speed that maximizes bandwidth without increasing RTT. There’s a rough sketch of that cycling below.
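A much-simplified sketch of how those modes might fit together. This is not the real implementation (the production code is `tcp_bbr` in the Linux kernel, written in C), and the class, phase names, and constants here are all made up for illustration; the real algorithm’s phase transitions, timers, and windowed filters are omitted:

```python
# Hypothetical, simplified sketch of BBR-style mode cycling -- not the
# actual tcp_bbr state machine. Phase transitions are left out.

PROBE_RTT_RATE_FACTOR = 0.5   # rate-limit hard so queues can drain
PROBE_BW_RATE_FACTOR = 1.25   # overshoot slightly to test for headroom

class ToyBBR:
    def __init__(self):
        self.min_rtt = float("inf")   # running-min RTT estimate
        self.max_bw = 0.0             # running-max bandwidth estimate
        self.phase = "NORMAL"

    def on_ack(self, rtt_sample, bw_sample):
        # Every ACK refreshes both estimates, whatever the current phase.
        self.min_rtt = min(self.min_rtt, rtt_sample)
        self.max_bw = max(self.max_bw, bw_sample)

    def pacing_rate(self):
        if self.phase == "PROBE_RTT":
            # Send well below the estimated bandwidth so the bottleneck
            # queue drains and the next RTT samples approach RTprop.
            return self.max_bw * PROBE_RTT_RATE_FACTOR
        if self.phase == "PROBE_BW":
            # Send slightly faster than the pipe seems to handle; if the
            # RTT does not rise, there was spare bandwidth to claim.
            return self.max_bw * PROBE_BW_RATE_FACTOR
        # Normal mode: pace at exactly the estimated bottleneck rate,
        # filling the pipe without building a standing queue.
        return self.max_bw

bbr = ToyBBR()
bbr.phase = "PROBE_BW"
bbr.on_ack(rtt_sample=0.042, bw_sample=95.0)
print(bbr.pacing_rate())   # 118.75: probing slightly above the 95.0 estimate
```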

Bravo Neal Cardwell, Yuchung Cheng, Soheil Hassas Yeganeh, et al.!

Originally shared by Dave Taht

Google’s new TCP BBR congestion control algorithm paper is now fully available online. The title is both hysterical and accurate: “Congestion-Based Congestion Control” – http://queue.acm.org/detail.cfm?id=3022184

One more nail in the coffin of #bufferbloat.