On the Convergence of Policy in Unregularized Policy Mirror Descent

05/17/2022
by Dachao Lin, et al.

In this short note, we analyze the convergence of the policy iterates in the recently popular policy mirror descent (PMD). We mainly consider the unregularized setting of [11] with generalized Bregman divergences; the difference is that we directly derive convergence rates for the policy itself under generalized Bregman divergences. Our results are inspired by earlier analyses of the convergence of the value function and extend the study of policy mirror descent. Although some of these results have appeared in prior work, we further show that a large class of Bregman divergences, including the classical squared Euclidean distance, yields finite-step convergence to an optimal policy.
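To make the Euclidean case concrete, below is a minimal sketch of one unregularized PMD update at a single state. It assumes the Bregman divergence D(p, q) = 0.5 * ||p - q||^2, under which the PMD step reduces to a projected gradient step on the simplex; the Q-values, step size, and toy numbers are illustrative assumptions, not taken from the paper, and Q is held fixed here for simplicity (in true PMD it is re-evaluated under the current policy each iteration).

```python
import numpy as np

def project_to_simplex(v: np.ndarray) -> np.ndarray:
    """Euclidean projection of v onto the probability simplex
    (the sorting-based algorithm of Duchi et al., 2008)."""
    u = np.sort(v)[::-1]                       # sort entries descending
    css = np.cumsum(u)
    # Largest index j with u_j - (sum_{i<=j} u_i - 1)/j > 0 (0-indexed).
    rho = np.nonzero(u - (css - 1.0) / np.arange(1, len(v) + 1) > 0)[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)     # shift that restores unit mass
    return np.maximum(v - theta, 0.0)

def pmd_step_euclidean(pi_s: np.ndarray, q_s: np.ndarray, eta: float) -> np.ndarray:
    """One PMD update at a single state with the Euclidean Bregman
    divergence: pi_new(.|s) = Proj_simplex(pi(.|s) + eta * Q(s, .))."""
    return project_to_simplex(pi_s + eta * q_s)

# Toy usage: uniform policy over 3 actions, hypothetical Q(s, .) values.
pi_s = np.full(3, 1.0 / 3.0)
q_s = np.array([1.0, 0.2, -0.5])
for k in range(10):
    pi_s = pmd_step_euclidean(pi_s, q_s, eta=0.5)
print(pi_s)  # the mass reaches the greedy action after finitely many steps
```

Running this loop, the policy hits the vertex of the simplex corresponding to the greedy action exactly (not just asymptotically), which is the finite-step convergence phenomenon the abstract highlights for the Euclidean divergence.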

