Average Consensus: A Little Learning Goes A Long Way

10/12/2020
by Bernadette Charron-Bost et al.

When networked systems of autonomous agents carry out complex tasks, the control and coordination sought after generally depend on a few fundamental control primitives. Chief among these primitives is consensus, where agents are to converge to a common estimate within the range of initial values, which becomes average consensus when the joint limit should be the average of the initial values. To provide reliable services that are easy to deploy, these primitives should operate even when the network is subject to frequent and unpredictable changes. Moreover, they should mobilize few computational resources so that low-powered, deterministic, and anonymous agents can partake in the network. In this stringent adversarial context, we investigate the distributed implementation of these primitives over networks with bidirectional, but potentially short-lived, communication links. Inspired by the classic EqualNeighbor and Metropolis agreement rules for multi-agent systems, we design distributed algorithms for consensus and average consensus, which we show to operate in polynomial time in a synchronous temporal model. These algorithms are fully distributed, requiring neither symmetry-breaking devices such as unique identifiers, nor global control or knowledge of the network. Our strategy consists in making agents learn simple structural parameters of the network – namely, their largest degrees – which constitutes enough information to build simple update rules, implementable locally with little computational and memory overhead.
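To make the classic Metropolis rule mentioned above concrete, here is a minimal simulation sketch. It is not the paper's learning-based algorithm (which additionally has agents estimate their largest degrees over time); it only illustrates the underlying update rule, in which each agent weighs the value of each neighbor by 1/(1 + max of the two degrees). Because the resulting weight matrix is symmetric and doubly stochastic, the average of the values is preserved at every round, and on a connected bidirectional graph the values converge to that average. The graph, function names, and initial values are illustrative assumptions.

```python
def metropolis_step(x, neighbors):
    """One synchronous round of the Metropolis rule.

    x:         dict mapping agent id -> current value
    neighbors: dict mapping agent id -> list of neighbor ids (undirected)

    Each agent i moves toward each neighbor j with weight
    1 / (1 + max(deg_i, deg_j)). The weights are symmetric, so the
    update matrix is doubly stochastic and the sum (hence the average)
    of the values is invariant.
    """
    deg = {i: len(neighbors[i]) for i in neighbors}
    new_x = dict(x)
    for i in neighbors:
        for j in neighbors[i]:
            w = 1.0 / (1 + max(deg[i], deg[j]))
            new_x[i] += w * (x[j] - x[i])
    return new_x


# Illustrative example: a path graph 0 - 1 - 2 - 3 with initial
# values averaging 6.0; repeated rounds drive all values to 6.0.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
x = {0: 0.0, 1: 4.0, 2: 8.0, 3: 12.0}
for _ in range(200):
    x = metropolis_step(x, neighbors)
```

Note that each agent only needs its own degree and its neighbors' degrees, which is what makes the rule attractive for anonymous, low-powered agents; the difficulty the paper addresses is that over time-varying links these degrees change, which is where learning the largest degrees comes in.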
