Decentralized Stochastic Optimization with Inherent Privacy Protection

by Yongqiang Wang, et al.

Decentralized stochastic optimization is a basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing. Since the data involved usually contain sensitive information, such as user locations, healthcare records, and financial transactions, privacy protection has become an increasingly pressing need in implementations of decentralized stochastic optimization algorithms. In this paper, we propose a decentralized stochastic gradient descent algorithm with inherent privacy protection for every participating agent, both against other participating agents and against external eavesdroppers. The proposed algorithm builds a dynamics-based gradient-obfuscation mechanism into the iteration to enable privacy protection without compromising optimization accuracy, in marked contrast to differential-privacy-based solutions for decentralized optimization, which must trade optimization accuracy for privacy. The dynamics-based approach is also encryption-free, and hence avoids the heavy communication and computation overhead that is a common problem with encryption-based privacy solutions for decentralized stochastic optimization. Besides rigorously characterizing the convergence performance of the proposed algorithm under both convex and non-convex objective functions, we also provide a rigorous information-theoretic analysis of the strength of its privacy protection. Simulation results for a distributed-estimation problem, as well as numerical experiments for decentralized learning on a benchmark machine-learning dataset, confirm the effectiveness of the proposed approach.
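To make the setting concrete, the following is a minimal sketch of the *baseline* decentralized gradient descent iteration that such algorithms build on: each agent holds a private local objective, mixes its iterate with neighbors through a doubly stochastic weight matrix, and takes a local gradient step. This sketch uses full local gradients on toy least-squares objectives and does **not** reproduce the paper's dynamics-based gradient-obfuscation mechanism; the ring topology, Metropolis-style weights, and step-size schedule are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, n = 4, 3, 10               # agents, variable dimension, samples per agent

# Each agent i privately holds f_i(x) = 0.5 * ||A_i x - b_i||^2
A = [rng.standard_normal((n, d)) for _ in range(m)]
b = [rng.standard_normal(n) for _ in range(m)]

# Doubly stochastic mixing matrix for a ring graph (uniform neighbor weights)
W = np.zeros((m, m))
for i in range(m):
    W[i, (i - 1) % m] = W[i, (i + 1) % m] = 1.0 / 3.0
    W[i, i] = 1.0 / 3.0

x = np.zeros((m, d))             # stacked local iterates, one row per agent
for k in range(2000):
    step = 1.0 / (k + 50)        # diminishing step size (assumed schedule)
    grads = np.stack([A[i].T @ (A[i] @ x[i] - b[i]) for i in range(m)])
    x = W @ x - step * grads     # consensus mixing + local gradient step

# All agents should approach the minimizer of the *global* sum of objectives
A_all, b_all = np.vstack(A), np.concatenate(b)
x_star = np.linalg.lstsq(A_all, b_all, rcond=None)[0]
```

Note that in this plain iteration each agent transmits its raw iterate `x[i]` to its neighbors every round, which is exactly the information leakage the paper's obfuscation mechanism is designed to prevent.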

