Multi-Task Imitation Learning for Linear Dynamical Systems

12/01/2022
by Thomas T. Zhang et al.

We study representation learning for efficient imitation learning over linear systems. In particular, we consider a setting where learning is split into two phases: (a) a pre-training step where a shared k-dimensional representation is learned from H source policies, and (b) a target policy fine-tuning step where the learned representation is used to parameterize the policy class. We find that the imitation gap over trajectories generated by the learned target policy is bounded by Õ(k n_x / (H N_shared) + k n_u / N_target), where n_x > k is the state dimension, n_u is the input dimension, N_shared denotes the amount of data collected from each source policy during representation learning, and N_target is the amount of target task data. This result formalizes the intuition that aggregating data across related tasks to learn a representation can significantly improve the sample efficiency of learning a target task. The trends suggested by this bound are corroborated in simulation.
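The two-phase scheme above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's algorithm: expert policies are linear, u = F_h Φ x, with a shared representation Φ; states are drawn i.i.d. and actions are noiseless; the shared subspace is recovered by an SVD of stacked per-task least-squares estimates, and the target head is then fit on only a few samples (fewer than the state dimension).

```python
import numpy as np

rng = np.random.default_rng(0)
n_x, n_u, k, H = 10, 3, 2, 20   # state dim, input dim, representation dim, source tasks
N_shared, N_target = 200, 5     # samples per source policy / target samples (note N_target < n_x)

# Ground-truth shared representation Phi (k x n_x) and per-task heads F_h (toy construction)
Phi = np.linalg.qr(rng.standard_normal((n_x, k)))[0].T   # orthonormal rows
F = [rng.standard_normal((n_u, k)) for _ in range(H)]

def rollout(K, N):
    """Sample i.i.d. states and expert actions u = K x (noiseless, for clarity)."""
    X = rng.standard_normal((N, n_x))
    return X, X @ K.T

# (a) Pre-training: least-squares estimate of each source policy, then
# recover the shared k-dimensional row space from the stacked estimates via SVD.
K_hats = []
for h in range(H):
    X, U = rollout(F[h] @ Phi, N_shared)
    K_hats.append(np.linalg.lstsq(X, U, rcond=None)[0].T)
Phi_hat = np.linalg.svd(np.vstack(K_hats))[2][:k]        # top-k right singular vectors

# (b) Fine-tuning: fit only the low-dimensional head on N_target samples,
# regressing target actions on the k-dimensional features Phi_hat @ x.
F_target = rng.standard_normal((n_u, k))
X_t, U_t = rollout(F_target @ Phi, N_target)
Z_t = X_t @ Phi_hat.T
F_hat = np.linalg.lstsq(Z_t, U_t, rcond=None)[0].T
K_target_hat = F_hat @ Phi_hat

err = np.linalg.norm(K_target_hat - F_target @ Phi) / np.linalg.norm(F_target @ Phi)
print(f"relative policy error: {err:.2e}")
```

With 5 target samples in a 10-dimensional state space, direct least squares is underdetermined, but regression in the learned 2-dimensional feature space recovers the target policy essentially exactly, mirroring the k n_u / N_target term in the bound.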
