When to be critical? Performance and evolvability in different regimes of neural Ising agents

by Sina Khajehabdollahi et al.

It has long been hypothesized that operating close to the critical state is beneficial for natural and artificial systems, and for their evolution. We put this hypothesis to the test in a system of evolving foraging agents controlled by neural networks whose dynamical regime can adapt throughout evolution. Surprisingly, we find that all populations that discover solutions evolve to be subcritical. Through a resilience analysis, we find that there are nevertheless benefits to starting evolution in the critical regime: initially critical agents maintain their fitness level under environmental changes (for example, in the lifespan) and degrade gracefully when their genome is perturbed. In contrast, initially subcritical agents, even when evolved to the same fitness, often fail to withstand changes in the lifespan and degrade catastrophically under genetic perturbations. Furthermore, we find that the optimal distance to criticality depends on task complexity. To test this, we introduce a hard and a simple task: for the hard task, agents evolve closer to criticality, whereas more subcritical solutions are found for the simple task. We verify that our results are independent of the chosen evolutionary mechanism by testing two fundamentally different approaches: a genetic algorithm and an evolution strategy. In summary, our study suggests that although optimal behaviour in the simple task is obtained in a subcritical regime, initializing near criticality is important for efficiently finding optimal solutions for new tasks of unknown complexity.




