Sharp bounds on the price of bandit feedback for several models of mistake-bounded online learning
We determine sharp bounds on the price of bandit feedback for several variants of the mistake-bound model. The first part of the paper presents bounds for the r-input weak reinforcement model and the r-input delayed, ambiguous reinforcement model. In both models, the adversary gives r inputs in each round and indicates a correct answer only if all r guesses are correct. The only difference between the two models is that in the delayed, ambiguous model, the learner must answer each input before receiving the next input of the round, whereas in the weak reinforcement model the learner receives all r inputs at once. In the second part of the paper, we introduce models for online learning with permutation patterns, in which a learner attempts to learn a permutation from a set of permutations by guessing statistics related to sub-permutations. For these permutation models, we prove sharp bounds on the price of bandit feedback.
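The round structure of the two r-input models can be sketched as follows. This is an illustrative simulation, not code from the paper; the function names and the learner interfaces are hypothetical, and the feedback is reduced to the single bit the abstract describes (whether all r guesses were correct).

```python
def weak_reinforcement_round(inputs, truth, learner):
    """One round of the r-input weak reinforcement model (sketch).

    The learner sees all r inputs of the round at once, produces all r
    guesses, and then receives a single bit of feedback: whether every
    one of its r guesses was correct.
    """
    guesses = learner(inputs)          # all r inputs available up front
    return guesses == truth            # one-bit feedback for the round


def delayed_ambiguous_round(inputs, truth, learner_step):
    """One round of the r-input delayed, ambiguous model (sketch).

    The learner must answer each input before seeing the next one;
    feedback is the same single bit, revealed only after all r answers.
    """
    guesses = []
    for x in inputs:                   # inputs revealed one at a time
        guesses.append(learner_step(x, list(guesses)))
    return guesses == truth
```

In either model, a negative bit is ambiguous: the learner learns that at least one of its r guesses was wrong, but not which one, which is the source of the "price" paid for bandit feedback relative to full feedback.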