Differentially Private Temporal Difference Learning with Stochastic Nonconvex-Strongly-Concave Optimization

01/25/2022
by   Canzhe Zhao, et al.

Temporal difference (TD) learning is a widely used method for policy evaluation in reinforcement learning. Although many TD learning methods have been developed in recent years, little attention has been paid to preserving privacy, and most existing approaches may raise data-privacy concerns for users. To enable policies with complex representational abilities, in this paper we consider preserving privacy in TD learning with nonlinear value function approximation. This is challenging because such a nonlinear problem is usually formulated as stochastic nonconvex-strongly-concave optimization to obtain a finite-sample analysis, which requires simultaneously preserving privacy on both the primal and dual sides. To this end, we employ momentum-based stochastic gradient descent ascent to obtain a single-timescale algorithm, and we achieve a good trade-off between meaningful privacy and utility guarantees on both the primal and dual sides by perturbing the gradients on both sides with well-calibrated Gaussian noise. As a result, our DPTD algorithm provides an (ϵ,δ)-differential privacy (DP) guarantee for the sensitive information encoded in transitions while retaining the original power of TD learning, with utility upper bounded by Õ((d log(1/δ))^{1/8}/(nϵ)^{1/4}) (the tilde in this paper hides logarithmic factors), where n is the trajectory length and d is the dimension. Extensive experiments conducted in OpenAI Gym show the advantages of our proposed algorithm.
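The core mechanism described above — clipping the stochastic gradients on both the primal and dual sides to bound sensitivity, adding calibrated Gaussian noise, and applying single-timescale momentum updates — can be illustrated with a minimal sketch. This is not the paper's DPTD algorithm itself: the function name, constants, and the toy objective (linear in θ, strongly concave in ω) are illustrative assumptions, and the noise scale is not calibrated to a specific (ϵ,δ) budget.

```python
import numpy as np

def dp_sgda(grad_theta, grad_omega, theta, omega, n_steps=1000,
            lr=0.05, beta=0.8, sigma=0.1, clip=1.0, rng=None):
    """Sketch of momentum-based stochastic gradient descent ascent with
    Gaussian perturbation on both primal (theta) and dual (omega) gradients.
    All hyperparameter values here are illustrative, not the paper's."""
    rng = np.random.default_rng(0) if rng is None else rng
    m_theta = np.zeros_like(theta)  # primal momentum
    m_omega = np.zeros_like(omega)  # dual momentum
    for _ in range(n_steps):
        g_theta = grad_theta(theta, omega)
        g_omega = grad_omega(theta, omega)
        # clip to bound gradient sensitivity, then add Gaussian noise
        g_theta = g_theta / max(1.0, np.linalg.norm(g_theta) / clip)
        g_omega = g_omega / max(1.0, np.linalg.norm(g_omega) / clip)
        g_theta = g_theta + sigma * rng.standard_normal(g_theta.shape)
        g_omega = g_omega + sigma * rng.standard_normal(g_omega.shape)
        # single-timescale momentum updates: descent on theta, ascent on omega
        m_theta = beta * m_theta + (1 - beta) * g_theta
        m_omega = beta * m_omega + (1 - beta) * g_omega
        theta = theta - lr * m_theta
        omega = omega + lr * m_omega
    return theta, omega

# toy saddle-point objective f(theta, omega) = theta*omega - omega**2 / 2,
# strongly concave in omega; both iterates should settle near the origin
gt = lambda th, om: om        # gradient of f w.r.t. theta
gw = lambda th, om: th - om   # gradient of f w.r.t. omega
theta, omega = dp_sgda(gt, gw, np.array([2.0]), np.array([0.0]))
```

Note that the noise is added after clipping, so the clipping threshold fixes the sensitivity to which the Gaussian noise would be calibrated in a full DP analysis.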

