Sparsifying Distributed Algorithms with Ramifications in Massively Parallel Computation and Centralized Local Computation

07/17/2018
by Mohsen Ghaffari, et al.

We introduce a method for sparsifying distributed algorithms and exhibit how it leads to improvements that go past known barriers in two algorithmic settings of large-scale graph processing: Massively Parallel Computation (MPC) and Local Computation Algorithms (LCA).

- MPC with Strongly Sublinear Memory: Recently, there has been growing interest in obtaining MPC algorithms that are faster than their classic O(log n)-round parallel counterparts for problems such as MIS, Maximal Matching, 2-Approximation of Minimum Vertex Cover, and (1+ϵ)-Approximation of Maximum Matching. Currently, all such MPC algorithms require Ω̃(n) memory per machine. Czumaj et al. [STOC'18] were the first to handle Õ(n) memory, running in O((log log n)^2) rounds. We obtain Õ(√(log Δ))-round MPC algorithms for all these four problems that work even when each machine has memory n^α for any constant α ∈ (0, 1). Here, Δ denotes the maximum degree. These are the first sublogarithmic-time algorithms for these problems that break the linear-memory barrier.

- LCAs with Query Complexity Below the Parnas-Ron Paradigm: Currently, the best known LCA for MIS has query complexity Δ^O(log Δ) · poly(log n), by Ghaffari [SODA'16]. As pointed out by Rubinfeld, obtaining a query complexity of poly(Δ log n) remains a central open question. Ghaffari's bound almost reaches a Δ^Ω(log Δ / log log Δ) barrier common to all known MIS LCAs, which simulate distributed algorithms by learning the local topology, à la Parnas-Ron [TCS'07]. This barrier follows from the Ω(log Δ / log log Δ) distributed lower bound of Kuhn et al. [JACM'16]. We break this barrier and obtain an MIS LCA with query complexity Δ^O(log log Δ) · poly(log n).
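To make the Parnas-Ron simulation paradigm mentioned above concrete, here is a minimal sketch of a local-simulation MIS oracle in the style of the random-rank greedy analysis of Nguyen-Onak: a vertex is in the MIS iff every neighbor with a smaller random rank is itself excluded, so answering one query explores only a local neighborhood whose size is governed by powers of the maximum degree Δ. This is an illustration of the general paradigm, not the paper's algorithm; names such as `make_mis_oracle` and `in_mis` are ours.

```python
import random

def make_mis_oracle(adj, seed=0):
    """Return an oracle answering 'is v in the greedy MIS?' for graph `adj`.

    `adj` maps each vertex to a list of its neighbors. The shared random
    seed plays the role of the LCA's common randomness.
    """
    rng = random.Random(seed)
    # Fix one global random rank per vertex; ranks are distinct w.p. 1,
    # so the recursion below always moves to strictly smaller ranks.
    ranks = {v: rng.random() for v in adj}
    cache = {}

    def in_mis(v):
        if v in cache:
            return cache[v]
        # v joins the MIS iff no lower-ranked neighbor is in the MIS;
        # only the local neighborhood of v is ever explored.
        ans = all(not in_mis(u) for u in adj[v] if ranks[u] < ranks[v])
        cache[v] = ans
        return ans

    return in_mis

# Usage on a 5-cycle: per-vertex answers form a maximal independent set.
adj = {i: [(i - 1) % 5, (i + 1) % 5] for i in range(5)}
oracle = make_mis_oracle(adj)
mis = {v for v in adj if oracle(v)}
# Independence: no two adjacent vertices are both in `mis`.
assert all(not (v in mis and u in mis) for v in adj for u in adj[v])
# Maximality: every vertex outside `mis` has a neighbor inside.
assert all(v in mis or any(u in mis for u in adj[v]) for v in adj)
```

The known barrier arises because this style of simulation may need to learn a topology of radius matching the distributed round complexity; the paper's contribution is precisely to go below that.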
