On the Fair Comparison of Optimization Algorithms in Different Machines

by Etor Arza, et al.

An experimental comparison of two or more optimization algorithms requires that the same computational resources be assigned to each algorithm. When a maximum runtime is set as the stopping criterion, all algorithms need to be executed on the same machine if they are to use the same resources. Unfortunately, the implementation code of the algorithms is not always available, which means that running the algorithms to be compared on the same machine is not always possible. Even when the implementations are available, some optimization algorithms might be costly to run, such as those that train large neural networks in the cloud. In this paper, we consider the following problem: how do we compare the performance of a new optimization algorithm B with a known algorithm A from the literature if we only have the results (the objective values) and the runtime of algorithm A on each instance? Specifically, we present a methodology that enables a statistical analysis of the performance of algorithms executed on different machines. The proposed methodology has two parts. First, we propose a model that, given the runtime of an algorithm on one machine, estimates the runtime of the same algorithm on another machine. This model can be adjusted so that the probability of estimating a runtime longer than the true one is arbitrarily low. Second, we introduce an adaptation of the one-sided sign test that uses a modified p-value and takes that probability into account. This adaptation avoids increasing the probability of type I error associated with executing algorithms A and B on different machines.
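To make the second part more concrete, the following sketch shows a classical one-sided sign test together with one plausible conservative correction: if each per-instance comparison can be spuriously decided in B's favour with probability at most `gamma` (for example, because A's runtime was over-estimated on the new machine), the null success probability can be inflated from 0.5 to 0.5 + `gamma`. This folding of `gamma` into the null is an illustrative assumption, not the exact modified p-value proposed in the paper.

```python
from math import comb

def sign_test_p_value(wins_b, n, gamma=0.0):
    """One-sided sign test p-value for the hypothesis 'B beats A'.

    wins_b : number of instances where B's objective value beats A's.
    n      : number of instances with a strict winner (ties discarded).
    gamma  : upper bound on the probability that a single comparison is
             spuriously decided in B's favour (e.g. due to runtime
             over-estimation on a different machine).  gamma = 0 recovers
             the classical one-sided sign test.

    NOTE: inflating the null success probability to 0.5 + gamma is an
    illustrative conservative correction, not the paper's exact adaptation.
    """
    p0 = 0.5 + gamma  # conservative null: each comparison favours B with prob <= p0
    # Exact binomial tail: P(X >= wins_b) for X ~ Binomial(n, p0)
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(wins_b, n + 1))
```

With `gamma = 0` and 8 wins out of 10 instances, this returns the usual binomial-tail p-value of 56/1024 ≈ 0.055; any positive `gamma` makes the test strictly more conservative (larger p-value), which is the direction needed to avoid inflating the type I error.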




