On the Convergence of ADMM with Task Adaption and Beyond
Along with the development of learning and vision, the Alternating Direction Method of Multipliers (ADMM) has become a popular algorithm for separable optimization models with linear constraints. However, ADMM and its numerical variants (e.g., inexact, proximal, or linearized) struggle to achieve state-of-the-art performance on complex learning and vision tasks due to their weak task-adaption ability. Recently, there has been increasing interest in incorporating task-specific computational modules (e.g., designed filters or learned architectures) into ADMM iterations. Unfortunately, these task-related modules introduce uncontrolled and unstable iterative flows, and they also break the structure of the original optimization model. Therefore, existing theoretical investigations are invalid for the resulting task-specific iterations. In this paper, we develop a simple and generic proximal ADMM framework that incorporates flexible task-specific modules for learning and vision problems. We rigorously prove convergence in both the objective function values and the constraint violation, and provide the worst-case convergence rate measured by the iteration complexity. Our investigations not only develop new perspectives for analyzing task-adaptive ADMM but also supply meaningful guidelines for designing practical optimization methods for real-world applications. Numerical experiments are conducted to verify the theoretical results and demonstrate the efficiency of our algorithmic framework.
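To make the setting concrete, below is a minimal Python sketch of a linearized proximal ADMM loop for a problem of the form min_{x,z} f(x) + g(z) subject to Ax + Bz = c, in which the z-subproblem is replaced by a plug-in task-specific module (e.g., a designed filter or learned denoiser). This is an illustrative sketch under our own assumptions, not the paper's algorithm: the names `prox_f` and `task_module`, the step sizes, and the update order are hypothetical choices for exposition.

```python
import numpy as np

def task_adaptive_prox_admm(A, B, c, prox_f, task_module,
                            rho=1.0, tau=0.5, num_iters=100):
    """Sketch of linearized proximal ADMM for
        min_{x,z} f(x) + g(z)  s.t.  A x + B z = c,
    where the z-update is handed to a task-specific module.
    `prox_f(v, alpha)` and `task_module(u)` are hypothetical
    callables supplied by the user, not taken from the paper."""
    x = np.zeros(A.shape[1])
    z = np.zeros(B.shape[1])
    lam = np.zeros(A.shape[0])  # Lagrange multiplier for Ax + Bz = c
    for _ in range(num_iters):
        # x-update: linearize the augmented Lagrangian at the current
        # point and take a proximal step; with proximal weight tau/rho
        # this is x <- prox_{(tau/rho) f}(x - tau * A^T r), where
        # r = Ax + Bz - c + lam/rho is the scaled residual.
        r = A @ x + B @ z - c + lam / rho
        x = prox_f(x - tau * (A.T @ r), tau / rho)
        # z-update: a task-specific module replaces the exact proximal
        # operator of g (plug-and-play style), applied to a gradient step.
        r = A @ x + B @ z - c + lam / rho
        z = task_module(z - tau * (B.T @ r))
        # dual ascent on the linear-constraint residual
        lam = lam + rho * (A @ x + B @ z - c)
    return x, z, lam
```

Here `prox_f` could be, for instance, soft-thresholding when f is an l1 penalty, and `task_module` any filtering or learned mapping; the convergence question the paper addresses is precisely under what conditions such plug-in iterations still drive both the objective values and the constraint violation to their limits.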