Dart: Divide and Specialize for Fast Response to Congestion in RDMA-based Datacenter Networks
Though Remote Direct Memory Access (RDMA) promises to reduce datacenter network latencies significantly compared to TCP (e.g., 10x), end-to-end congestion control in the presence of incasts remains a challenge. Targeting the full generality of the congestion problem, previous schemes rely on slow, iterative convergence to the appropriate sending rates (e.g., TIMELY takes 50 RTTs). We leverage the finding of several prior studies that most congestion in datacenter networks occurs at the receiver. Accordingly, we propose a divide-and-specialize approach, called Dart, which isolates the common case of receiver congestion and further subdivides the remaining in-network congestion into the simpler spatially-localized and the harder spatially-dispersed cases. For receiver congestion, Dart proposes direct apportioning of sending rates (DASR), in which a receiver with n senders directs each sender to cut its rate by a factor of n, converging in only one RTT. For the spatially-localized case, Dart employs deflection, adding novel switch hardware for in-order flow deflection (IOFD) because RDMA disallows packet reordering, providing fast (under one RTT), light-weight response. For the uncommon spatially-dispersed case, Dart falls back to DCQCN. Small-scale testbed measurements and at-scale simulations, respectively, show that Dart achieves 60% lower 99th-percentile latency, and similar and 58% higher throughput, than TIMELY and DCQCN.
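The one-RTT convergence of DASR follows from simple arithmetic at the receiver: with n concurrent senders, each sender's rate is cut by a factor of n so the aggregate matches the receiver's line rate. A minimal sketch of that apportioning step (function name, rate units, and the assumption of equal fair shares are illustrative; the paper's actual mechanism involves receiver-to-sender signaling not shown here):

```python
def dasr_apportion(receiver_line_rate_gbps, senders):
    """Hypothetical sketch of DASR's apportioning arithmetic:
    the receiver, seeing n concurrent senders, directs each to
    send at 1/n of the receiver's line rate, so the aggregate
    equals the line rate after a single RTT."""
    n = len(senders)
    if n == 0:
        return {}
    per_sender_rate = receiver_line_rate_gbps / n
    # One message per sender carries its new rate; no iteration needed.
    return {sender: per_sender_rate for sender in senders}

# Example: a 100 Gbps receiver with a 4-sender incast
rates = dasr_apportion(100, ["s1", "s2", "s3", "s4"])
```

In this example each sender is assigned 25 Gbps, in contrast to iterative schemes that probe toward the fair share over tens of RTTs.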