Learning to flock through reinforcement

11/05/2019
by Mihir Durve et al.

Flocks of birds, schools of fish, and insect swarms are examples of coordinated group motion that arises spontaneously from the actions of many individuals. Here, we study flocking behavior from the viewpoint of multi-agent reinforcement learning. In this setting, a learning agent tries to maintain contact with the group using the velocity of its neighbors as its only sensory input. Each learning individual pursues this goal by exerting limited control over its own direction of motion. Using standard reinforcement learning algorithms, we show that: i) a learning agent exposed to a group of teachers, i.e. hard-wired flocking agents, learns to follow them, and ii) in the absence of teachers, a group of independently learning agents evolves towards a state in which each agent knows how to flock. In both scenarios, the emergent policy (or navigation strategy) corresponds to the polar velocity alignment mechanism of the well-known Vicsek model. These results show that a) such velocity alignment may have naturally evolved as an adaptive behavior aimed at minimizing the rate of neighbor loss, and b) this alignment not only favors (local) polar order, but also corresponds to the best policy/strategy for maintaining group cohesion when the sensory input is limited to the velocity of neighboring agents. In short: to stay together, steer together.
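To make the setup concrete, the sketch below (not the authors' code) shows a minimal tabular Q-learning agent placed among hard-wired Vicsek-style "teacher" agents. The state is the discretized angle between the agent's heading and its neighbors' mean heading, the actions are small left/right turns or going straight, and the reward is the number of neighbors retained within the interaction radius. All parameter values, names, and discretization choices are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumed, not from the paper)
L, R, V0 = 10.0, 1.0, 0.1          # box size, interaction radius, speed
DTHETA, ETA = 0.2, 0.1             # learner's max turn per step, teacher noise
N_TEACHERS, N_STATES = 40, 8       # number of teachers, relative-heading sectors
ACTIONS = np.array([-DTHETA, 0.0, DTHETA])   # turn left / go straight / turn right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1            # Q-learning hyperparameters

Q = np.zeros((N_STATES, len(ACTIONS)))

# Teachers: hard-wired Vicsek agents; learner: a single RL agent
t_pos = rng.uniform(0, L, size=(N_TEACHERS, 2))
t_theta = rng.uniform(-np.pi, np.pi, size=N_TEACHERS)
a_pos = np.array([L / 2, L / 2])
a_theta = 0.0

def neighbor_mask(pos):
    """Teachers within the interaction radius of `pos` (periodic box)."""
    d = t_pos - pos
    d -= L * np.round(d / L)
    return np.hypot(d[:, 0], d[:, 1]) < R

def state_of(theta, mask):
    """Discretize the angle between own heading and neighbors' mean heading."""
    if not mask.any():
        return 0                                   # arbitrary state when alone
    mean = np.arctan2(np.sin(t_theta[mask]).mean(), np.cos(t_theta[mask]).mean())
    rel = (mean - theta + np.pi) % (2 * np.pi) - np.pi
    return int((rel + np.pi) / (2 * np.pi) * N_STATES) % N_STATES

for step in range(5000):
    mask = neighbor_mask(a_pos)
    s = state_of(a_theta, mask)
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS else int(Q[s].argmax())

    # Teachers: Vicsek update (align with neighbors' mean heading plus noise)
    new_theta = t_theta.copy()
    for i in range(N_TEACHERS):
        m = neighbor_mask(t_pos[i])
        new_theta[i] = (np.arctan2(np.sin(t_theta[m]).mean(),
                                   np.cos(t_theta[m]).mean())
                        + ETA * rng.uniform(-np.pi, np.pi))
    t_theta = new_theta
    t_pos = (t_pos + V0 * np.column_stack((np.cos(t_theta), np.sin(t_theta)))) % L

    # Learner: limited steering control, then reward = neighbors kept
    a_theta += ACTIONS[a]
    a_pos = (a_pos + V0 * np.array([np.cos(a_theta), np.sin(a_theta)])) % L
    new_mask = neighbor_mask(a_pos)
    reward = new_mask.sum()                        # proxy for not losing neighbors
    s_next = state_of(a_theta, new_mask)
    Q[s, a] += ALPHA * (reward + GAMMA * Q[s_next].max() - Q[s, a])

print(Q.round(2))   # one row per relative-heading sector
```

In such a toy run, the greedy action in each relative-heading sector tends to steer the learner toward its neighbors' mean direction of motion, which is the Vicsek-style alignment behavior the abstract refers to.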
