Recursive Optimization of Convex Risk Measures: Mean-Semideviation Models
We develop and analyze stochastic subgradient methods for optimizing a new, versatile, application-friendly, and tractable class of convex risk measures, termed here mean-semideviations. Their construction relies on the concept of a risk regularizer, a one-dimensional nonlinear map with certain properties that essentially generalizes the positive-part weighting function appearing in the classical mean-upper-semideviation risk measure. After formally introducing mean-semideviations, we study their basic properties and present a fundamental constructive characterization result demonstrating their generality. We then introduce and rigorously analyze the MESSAGEp algorithm, an efficient stochastic subgradient procedure for iteratively solving convex mean-semideviation risk-averse problems to optimality. The MESSAGEp algorithm may be derived as an application of the T-SCGD algorithm of Yang et al. (2018). However, the generic theoretical framework of Yang et al. (2018) is too narrow and structurally restrictive for the optimization of mean-semideviations, failing to cover even the classical mean-upper-semideviation risk measure. By exploiting problem structure, we propose a substantially weaker theoretical framework, under which we establish pathwise convergence of the MESSAGEp algorithm in the same strong sense as in Yang et al. (2018). The new framework reveals a fundamental trade-off between the smoothness of the random position function and that of the particular mean-semideviation risk measure under consideration. Further, we explicitly show that the class of mean-semideviation problems supported by our framework is strictly larger than the corresponding class supported in Yang et al. (2018). Applicability of compositional stochastic optimization is thus established for a strictly wider spectrum of mean-semideviation problems, justifying the purpose of our work.
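For concreteness, and in notation assumed here rather than quoted from the paper: the classical mean-upper-semideviation risk measure (of order 1) of a random cost X is

\[
\rho(X) \;=\; \mathbb{E}[X] \;+\; c\,\mathbb{E}\!\left[\big(X - \mathbb{E}[X]\big)_{+}\right], \qquad c \in [0,1],
\]

and a mean-semideviation is obtained by replacing the positive-part weighting with a risk regularizer p, giving

\[
\rho(X) \;=\; \mathbb{E}[X] \;+\; c\,\mathbb{E}\!\left[p\big(X - \mathbb{E}[X]\big)\right].
\]

Because the inner mean appears inside the outer expectation, minimizing rho(F(x, W)) over a decision x is a compositional stochastic program. The Python sketch below illustrates the two-timescale idea underlying such methods: fast running averages track the inner expectations while the decision variable moves on a slower scale. All problem data (the quadratic position function, the Gaussian W, the step-size exponents) are hypothetical, and the update is a minimal illustration of the compositional structure, not the paper's exact MESSAGEp recursion.

```python
# Minimal two-timescale stochastic compositional subgradient sketch for a
# mean-semideviation objective rho(F(x, W)) = E[F] + c * E[p(F - E[F])],
# with the classical regularizer p(t) = max(t, 0).  All problem data are
# hypothetical; this illustrates the structure, not the exact MESSAGEp method.
import numpy as np

rng = np.random.default_rng(0)
c = 0.5                                    # semideviation weight, c in [0, 1]

F  = lambda x, w: (x - w) ** 2             # illustrative random position function
dF = lambda x, w: 2.0 * (x - w)            # its gradient in x
dp = lambda t: float(t > 0.0)              # a subgradient of p(t) = max(t, 0)

x, y, z = 0.0, 0.0, 0.0                    # decision variable; estimates of E[F], E[dF]
for k in range(1, 100_001):
    a, b = k ** -0.75, k ** -0.5           # x moves on the slower timescale
    w1, w2 = rng.normal(1.0, 1.0, size=2)  # two independent samples of W
    y = (1 - b) * y + b * F(x, w1)         # track the inner expectation E[F(x, W)]
    z = (1 - b) * z + b * dF(x, w1)        # track E[dF(x, W)]
    # stochastic subgradient of E[F] + c * E[p(F - E[F])] at the current x:
    g = dF(x, w2) + c * dp(F(x, w2) - y) * (dF(x, w2) - z)
    x -= a * g

# Monte Carlo check of the achieved risk value at the final iterate.
w = rng.normal(1.0, 1.0, size=200_000)
costs = F(x, w)
rho = costs.mean() + c * np.maximum(costs - costs.mean(), 0.0).mean()
print(f"x = {x:.3f},  estimated rho(F(x, W)) = {rho:.3f}")
```

In the actual algorithm, the number of auxiliary tracking recursions and the admissible step-size conditions are tied to the smoothness trade-off discussed above; the sketch fixes one simple choice purely for illustration.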