1. Congestion Control for Cross-Datacenter Networks
- Authors
- Yibo Zhu, Lei Cui, Kai Chen, Ge Chen, Wei Bai, Dongsu Han, and Gaoxiong Zeng
- Subjects
- TCP Vegas, Computer science, Computer Networks and Communications, Testbed, Linux kernel, Network congestion, Wide area network, Packet loss, Latency, Software-defined networking, Computer network
- Abstract
Geographically distributed applications hosted in the cloud are becoming prevalent. They run on a cross-datacenter network that consists of multiple data center networks (DCNs) connected by a wide area network (WAN). Such a cross-DC network imposes significant challenges on transport design because the DCN and WAN segments have vastly different characteristics (e.g., buffer depths, RTTs). In this paper, we find that existing DCN or WAN transports that react to ECN or delay alone do not (and cannot be extended to) work well in such an environment. The key reason is that neither signal, by itself, can simultaneously capture both the location and the degree of congestion, owing to the discrepancies between the DCN and WAN segments. Motivated by this, we present the design and implementation of GEMINI, which strategically integrates both ECN and delay signals for cross-DC congestion control. To achieve low latency, GEMINI bounds the inter-DC latency with the delay signal and prevents intra-DC packet loss with ECN. To maintain high throughput, GEMINI modulates the window dynamics and keeps buffer occupancy low using both congestion signals. GEMINI is implemented in the Linux kernel and evaluated through extensive testbed experiments. Results show that GEMINI achieves up to 53%, 31%, and 76% reductions in small-flow average completion times compared to TCP Cubic, DCTCP, and BBR, respectively, and up to a 58% reduction in large-flow average completion times compared to TCP Vegas.
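The abstract's core idea is a two-signal congestion controller: ECN marks capture intra-DC (DCN) congestion, while end-to-end delay bounds inter-DC (WAN) queueing. The Python sketch below illustrates that split under stated assumptions; the class name, constants, and the specific update rules (a DCTCP-style EWMA of ECN marks plus a Vegas-style delay threshold) are illustrative guesses at the mechanism, not the paper's exact GEMINI algorithm.

```python
# Illustrative sketch of a GEMINI-style sender reacting to two signals:
# ECN for intra-DC congestion, delay for inter-DC congestion.
# All constants and update rules here are assumptions for exposition.

class GeminiLikeSender:
    def __init__(self, base_rtt_s, delay_target_s=0.005, g=1 / 16):
        self.cwnd = 10.0                     # congestion window (packets)
        self.base_rtt = base_rtt_s           # propagation RTT (min observed)
        self.delay_target = delay_target_s   # bound on WAN queueing delay
        self.ecn_frac = 0.0                  # EWMA of ECN-marked ACKs (DCTCP-style)
        self.g = g                           # EWMA gain

    def on_ack(self, rtt_s, ecn_marked):
        # Update the moving fraction of ECN-marked packets (DCN signal).
        self.ecn_frac = (1 - self.g) * self.ecn_frac + self.g * (1.0 if ecn_marked else 0.0)
        queueing_delay = rtt_s - self.base_rtt

        if ecn_marked:
            # Intra-DC congestion: back off in proportion to the marking
            # fraction, as DCTCP does, to avoid shallow-buffer packet loss.
            self.cwnd *= 1 - self.ecn_frac / 2
        elif queueing_delay > self.delay_target:
            # Inter-DC congestion: the delay bound is exceeded, so shrink
            # the window toward the target (illustrative decrease rule).
            self.cwnd *= max(0.5, self.delay_target / queueing_delay)
        else:
            # Neither signal fires: additive increase, ~1 packet per RTT.
            self.cwnd += 1.0 / self.cwnd

        self.cwnd = max(self.cwnd, 1.0)
```

Driving `on_ack` with per-ACK RTT samples and ECN-echo flags is enough to see the intended behavior: ECN marks trigger proportional backoff before DCN buffers overflow, while sustained WAN queueing delay caps the window even when no marks arrive.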
- Published
- 2022