Learning Action-Transferable Policy with Action Embedding
Despite its great success on a variety of sequential decision-making tasks, deep reinforcement learning is extremely data-inefficient. Many approaches have been proposed to improve data efficiency, e.g., transfer learning, which uses knowledge learned from related tasks to accelerate training. Most previous research on transfer learning attempts to learn a common feature space of states across related tasks in order to exploit as much shared knowledge as possible. However, the semantic information of actions may be shared as well, even between tasks with different action-space sizes. In this work, we first propose a method to learn action embeddings for discrete actions in RL from generated trajectories without any prior knowledge, and then leverage these embeddings to transfer policies across tasks with different state spaces and/or discrete action spaces. We validate our method on a set of gridworld navigation tasks, discretized continuous control tasks, and fighting tasks in a commercial video game. Experimental results show that our method learns informative action embeddings and accelerates learning via policy transfer across tasks.
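To make the idea of learning action embeddings from trajectories concrete, here is a minimal sketch. It is an illustrative assumption, not the paper's actual algorithm: it treats the actions in a trajectory like words in a sentence and factorizes their windowed co-occurrence statistics with an SVD, so actions used in similar contexts receive similar vectors.

```python
import numpy as np

def action_embeddings(trajectories, n_actions, window=2, dim=2):
    """Embed discrete actions by factorizing their co-occurrence in trajectories.

    This is a hypothetical, word2vec-like instantiation of "learn action
    embeddings from generated trajectories"; the window size, log damping,
    and SVD rank are all illustrative choices.
    """
    cooc = np.zeros((n_actions, n_actions))
    for traj in trajectories:
        for i, a in enumerate(traj):
            lo, hi = max(0, i - window), min(len(traj), i + window + 1)
            for j in range(lo, hi):
                if j != i:
                    cooc[a, traj[j]] += 1.0
    # log(1 + counts) dampens very frequent pairs; SVD gives low-rank embeddings
    u, s, _ = np.linalg.svd(np.log1p(cooc))
    return u[:, :dim] * s[:dim]

rng = np.random.default_rng(0)
# Toy data: actions 0/1 always appear together, as do actions 2/3,
# so the two pairs should land in distinct regions of embedding space.
trajs = [rng.permutation([0, 1, 0, 1]).tolist() for _ in range(50)]
trajs += [rng.permutation([2, 3, 2, 3, 2]).tolist() for _ in range(50)]
emb = action_embeddings(trajs, n_actions=4)
print(emb.shape)  # → (4, 2)
```

In this toy run, actions 0 and 1 end up with nearly identical embeddings while actions 0 and 2 are nearly orthogonal, which is the kind of semantic structure a transferred policy could exploit across tasks with different action sets.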