On Packing Low-Diameter Spanning Trees

06/12/2020
by Julia Chuzhoy et al.

Edge connectivity of a graph is one of the most fundamental graph-theoretic concepts. The celebrated tree packing theorem of Tutte and Nash-Williams from 1961 states that every k-edge connected graph G contains a collection T of ⌊k/2⌋ edge-disjoint spanning trees, which we refer to as a tree packing; the diameter of the tree packing T is the largest diameter of any tree in T. A desirable property of a tree packing, one that is both necessary and sufficient for leveraging the high connectivity of a graph in distributed communication, is that its diameter is low. Yet, despite extensive research in this area, it remains unclear how to compute, in a low-diameter graph G, a tree packing whose diameter is sublinear in |V(G)|, or, alternatively, how to show that no such packing exists.

In this paper we provide the first non-trivial upper and lower bounds on the diameter of tree packings. First, we show that, for every k-edge connected n-vertex graph G of diameter D, there is a tree packing T of size Ω(k) and diameter O((101 k log n)^D) that causes edge-congestion at most 2. Second, we show that for every k-edge connected n-vertex graph G of diameter D, the diameter of G[p] is O(k^(D(D+1)/2)) with high probability, where G[p] is obtained by sampling each edge of G independently with probability p = Θ(log n / k). This provides a packing of Ω(k / log n) edge-disjoint trees, each of diameter at most O(k^(D(D+1)/2)). We then prove that these two results are nearly tight.

Lastly, we show that if every pair of vertices in a graph is connected by k edge-disjoint paths of length at most D, then there is a tree packing of size k and diameter O(D log n) that causes edge-congestion O(log n). We also provide several applications of low-diameter tree packings in distributed computation.
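The edge-sampling construction behind the second result can be illustrated concretely. The sketch below is our own illustration, not the paper's algorithm: the function name pack_trees_by_sampling, the use of BFS trees, and the constant choices are assumptions. It randomly partitions the edges of a k-edge connected graph on vertices 0..n-1 into Θ(k / log n) groups — each group then has the same distribution as G[p] with p = Θ(log n / k) — and extracts a BFS spanning tree from every group that happens to be connected. Because the groups partition the edge set, the returned trees are edge-disjoint.

```python
import math
import random
from collections import defaultdict, deque

def pack_trees_by_sampling(n, edges, k, seed=0):
    """Hypothetical sketch: partition the edges of a k-edge connected
    graph on vertices 0..n-1 into ~k/log n random groups, and return a
    BFS spanning tree (with a diameter bound) for each connected group."""
    rng = random.Random(seed)
    num_groups = max(1, int(k / math.log(n)))  # Θ(k / log n) buckets
    buckets = defaultdict(list)
    for e in edges:
        # Assigning each edge to a uniformly random bucket means each
        # bucket is distributed like G[p] with p ~ log n / k.
        buckets[rng.randrange(num_groups)].append(e)

    trees = []
    for group_edges in buckets.values():
        adj = defaultdict(list)
        for u, v in group_edges:
            adj[u].append(v)
            adj[v].append(u)
        # BFS from vertex 0; keep the tree only if it spans all n vertices.
        parent, dist = {0: None}, {0: 0}
        queue = deque([0])
        while queue:
            u = queue.popleft()
            for w in adj[u]:
                if w not in parent:
                    parent[w] = u
                    dist[w] = dist[u] + 1
                    queue.append(w)
        if len(parent) == n:
            tree_edges = [(w, p) for w, p in parent.items() if p is not None]
            # A BFS tree of depth d has diameter at most 2d.
            trees.append((tree_edges, 2 * max(dist.values())))
    return trees

# Example: the complete graph K_64 is 63-edge connected, so this should
# yield roughly 63 / log 64 ≈ 15 edge-disjoint spanning trees.
n = 64
edges = [(u, v) for u in range(n) for v in range(u + 1, n)]
print(len(pack_trees_by_sampling(n, edges, k=n - 1)))
```

Note that this sketch only certifies connectivity of each sampled group empirically; the paper's contribution is the high-probability bound of O(k^(D(D+1)/2)) on the diameter of each such group, which the sketch does not prove.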
