Online Aggregation of Unbounded Losses Using Shifting Experts with Confidence
We develop the setting of sequential prediction based on shifting experts and on a "smooth" version of the method of specialized experts. To aggregate the experts' predictions, we use the AdaHedge algorithm, a version of the Hedge algorithm with an adaptive learning rate, and extend it with the Fixed Share meta-algorithm. In this way we combine the advantages of both algorithms: (1) we use the shifting regret, a more appropriate measure of the algorithm's performance in this setting; (2) the regret bounds remain valid for signed, unbounded expert losses. In addition, (3) we incorporate into this scheme a "smooth" version of the method of specialized experts, which allows more flexible and accurate predictions. All results are obtained in the adversarial setting -- no assumptions are made about the nature of the data source. We present results of numerical experiments on short-term forecasting of electricity consumption based on real data.
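To illustrate the general aggregation scheme described above, the following is a minimal Python sketch of exponential-weights aggregation with an AdaHedge-type adaptive learning rate followed by a Fixed Share mixing step. It is not the paper's exact algorithm: the learning-rate rule and the switching rate `alpha` are written in their standard textbook form, the "smooth" specialized-experts component is omitted, and all names and defaults are illustrative.

```python
import numpy as np

def aggregate(losses, alpha=0.05):
    """Sketch: Hedge-style aggregation with an AdaHedge-type adaptive
    learning rate and a Fixed Share mixing step.

    losses: (T, N) array of expert losses (may be signed and unbounded).
    alpha:  Fixed Share switching rate (illustrative default).
    Returns the sequence of weight vectors used for prediction.
    """
    T, N = losses.shape
    w = np.full(N, 1.0 / N)        # current expert weights
    delta_sum = 0.0                # cumulative mixability gap (AdaHedge)
    weight_history = []

    for t in range(T):
        weight_history.append(w.copy())
        l = losses[t]

        # AdaHedge-style learning rate: aggressive while little gap has
        # accumulated, more conservative as the gap grows.
        eta = np.log(N) / delta_sum if delta_sum > 0 else 1e6

        # Hedge (dot-product) loss and mix loss for this round.
        h = float(w @ l)
        shifted = -eta * (l - l.min())                 # stabilise log-sum-exp
        m = l.min() - np.log(float(w @ np.exp(shifted))) / eta
        delta_sum += max(h - m, 0.0)                   # mixability gap >= 0

        # Exponential-weights (Hedge) update ...
        v = w * np.exp(shifted)
        v /= v.sum()
        # ... followed by the Fixed Share mixing step, which keeps every
        # expert's weight bounded away from zero so the aggregator can
        # track the best shifting expert.
        w = (1 - alpha) * v + alpha / N

    return np.array(weight_history)
```

The Fixed Share step is what distinguishes tracking the best shifting expert from tracking a single fixed expert: without the mixing toward the uniform distribution, an expert whose weight has collapsed cannot be recovered quickly after a switch.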