Online Dual Coordinate Ascent Learning

by Bicheng Ying et al.

The stochastic dual coordinate-ascent (S-DCA) technique is a useful alternative to the traditional stochastic gradient-descent algorithm for solving large-scale optimization problems, owing to its scalability to large data sets and strong theoretical guarantees. However, the available S-DCA formulation is limited to finite sample sizes and relies on performing multiple passes over the same data, which makes it ill-suited for online implementations where data keep streaming in. In this work, we develop an online dual coordinate-ascent (O-DCA) algorithm that responds to streaming data and does not need to revisit past data. This feature endows the resulting construction with continuous adaptation, learning, and tracking abilities, which are particularly attractive for online learning scenarios.
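To make the underlying mechanism concrete, here is a minimal sketch of the classical S-DCA update that the abstract refers to, applied to an L2-regularized hinge loss. Each coordinate step maximizes the dual over a single dual variable in closed form while maintaining the primal weights incrementally. This is the standard finite-sample SDCA recipe, not the paper's O-DCA algorithm; the function name, step formula, and toy data are illustrative. In a streaming variant, each arriving sample would be visited only once rather than revisited across epochs.

```python
import numpy as np

def sdca_hinge(X, y, lam=0.1, epochs=5, seed=0):
    """Stochastic dual coordinate ascent for L2-regularized hinge loss.

    Maintains the primal/dual link w = (1/(lam*n)) * sum_i alpha_i y_i x_i
    and updates one dual variable alpha_i per step in closed form.
    Illustrative sketch; not the O-DCA algorithm from the paper.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    alpha = np.zeros(n)   # one dual variable per sample, boxed in [0, 1]
    w = np.zeros(d)       # primal weights, kept consistent with alpha
    for _ in range(epochs):
        for i in rng.permutation(n):
            # closed-form maximizer of the dual over alpha_i alone
            grad = 1.0 - y[i] * X[i].dot(w)
            step = grad / (X[i].dot(X[i]) / (lam * n) + 1e-12)
            delta = np.clip(alpha[i] + step, 0.0, 1.0) - alpha[i]
            alpha[i] += delta
            w += delta * y[i] * X[i] / (lam * n)
    return w
```

On a small linearly separable problem, a few epochs of these coordinate steps already recover a separating direction, which is the behavior the online variant seeks to preserve without multiple passes.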



