On the Sample Complexity of Representation Learning in Multi-task Bandits with Global and Local structure
We investigate the sample complexity of learning the optimal arm for multi-task bandit problems. Arms consist of two components: one that is shared across tasks (that we call representation) and one that is task-specific (that we call predictor). The objective is to learn the optimal (representation, predictor)-pair for each task, under the assumption that the optimal representation is common to all tasks. Within this framework, efficient learning algorithms should transfer knowledge across tasks. We consider the best-arm identification problem with fixed confidence, where, in each round, the learner actively selects both a task and an arm, and observes the corresponding reward. We derive instance-specific sample complexity lower bounds satisfied by any (δ_G, δ_H)-PAC algorithm (such an algorithm identifies the best representation with probability at least 1-δ_G, and the best predictor for each task with probability at least 1-δ_H). We devise an algorithm, OSRL-SC, whose sample complexity approaches the lower bound and scales at most as H(G log(1/δ_G) + X log(1/δ_H)), where X, G, and H denote, respectively, the number of tasks, representations, and predictors. By comparison, this scaling is significantly better than that of the classical best-arm identification algorithm, which scales as HGX log(1/δ).
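To make the setting concrete, the following is a minimal sketch of the environment the abstract describes: arms are (representation, predictor) pairs, the optimal representation is shared by all tasks, and the learner pulls a (task, arm) pair each round. The Bernoulli reward model, the class name `MultiTaskBanditEnv`, and the specific mean values are illustrative assumptions, not details from the paper.

```python
import random

class MultiTaskBanditEnv:
    """Hypothetical sketch of the multi-task bandit setting.

    Arms are (representation g, predictor h) pairs. The optimal
    representation g_star is common to all X tasks, while the optimal
    predictor may differ per task. The reward distribution and the
    mean values below are illustrative assumptions only.
    """

    def __init__(self, X, G, H, seed=0):
        rng = random.Random(seed)
        self.g_star = rng.randrange(G)
        # Per-task optimal predictor under the shared best representation.
        self.h_star = [rng.randrange(H) for _ in range(X)]
        # Mean rewards: the best (g, h) pair of each task gets 0.9,
        # every other pair gets 0.5 (an arbitrary illustrative gap).
        self.means = [[[0.9 if (g == self.g_star and h == self.h_star[x])
                        else 0.5
                        for h in range(H)]
                       for g in range(G)]
                      for x in range(X)]
        self.rng = rng

    def pull(self, task, g, h):
        """The learner selects a task and an arm; a Bernoulli reward
        with the corresponding mean is observed."""
        return 1.0 if self.rng.random() < self.means[task][g][h] else 0.0
```

A (δ_G, δ_H)-PAC algorithm interacting with this environment would aim to identify `g_star` with probability at least 1-δ_G and each `h_star[x]` with probability at least 1-δ_H, using as few calls to `pull` as possible.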