Hard Problems are Easier for Success-based Parameter Control

Recent work has shown that simple success-based rules for self-adjusting parameters in evolutionary algorithms (EAs) can match or outperform the best fixed parameters on discrete problems. Non-elitism in a (1,λ) EA combined with a self-adjusting offspring population size λ outperforms common EAs on the multimodal Cliff problem. However, it was shown that this only holds if the success rate s that governs the self-adjustment is small enough. Otherwise, even on OneMax, the self-adjusting (1,λ) EA stagnates on an easy slope, where frequent successes drive down the offspring population size. We show that self-adjustment works as intended in the absence of easy slopes. We define everywhere hard functions, on which successes are never easy to find, and show that on such functions the self-adjusting (1,λ) EA is robust with respect to the choice of the success rate s. We give a general fitness-level upper bound on the number of evaluations and show that the expected number of generations is at most O(d + log(1/p_min)), where d is the number of non-optimal fitness values and p_min is the smallest probability of finding an improvement from a non-optimal search point. We discuss implications for the everywhere hard function LeadingOnes and for OneMaxBlocks, a new class of everywhere hard functions with tunable difficulty.
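
To make the success-based mechanism concrete, here is a minimal Python sketch of a (1,λ) EA with a self-adjusting offspring population size, assuming OneMax as the benchmark; the update strength F = 1.5, the mutation rate 1/n, and the evaluation budget are illustrative assumptions, not parameters taken from the paper.

```python
import random

def onemax(x):
    """Fitness: the number of 1-bits in the bit string."""
    return sum(x)

def mutate(x, rate):
    """Standard bit mutation: flip each bit independently with probability rate."""
    return [b ^ (random.random() < rate) for b in x]

def self_adjusting_one_comma_lambda_ea(n, s, F=1.5, max_evals=1_000_000):
    """(1,lambda) EA whose offspring population size lambda is adjusted
    by a success-based rule with success rate parameter s (illustrative sketch)."""
    x = [random.randint(0, 1) for _ in range(n)]
    fx = onemax(x)
    lam = 1.0   # real-valued lambda; rounded when sampling offspring
    evals = 0
    while fx < n and evals < max_evals:
        offspring = [mutate(x, 1.0 / n) for _ in range(max(1, round(lam)))]
        fits = [onemax(y) for y in offspring]
        evals += len(offspring)
        fy = max(fits)
        y = offspring[fits.index(fy)]
        # Success-based update: shrink lambda by F after a success and
        # grow it by F^(1/s) after a failure, so a success rate of
        # roughly 1/(s+1) keeps lambda in equilibrium.
        if fy > fx:
            lam = max(1.0, lam / F)
        else:
            lam *= F ** (1.0 / s)
        # Non-elitist comma selection: the best offspring always
        # replaces the parent, even when it is worse.
        x, fx = y, fy
    return evals
```

The failure branch is what makes everywhere hard functions benign: when successes are rare for every λ, λ keeps growing until an improvement is found, regardless of s. On an easy slope, by contrast, frequent successes repeatedly shrink λ, which is the stagnation effect described above.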
