Lower Generalization Bounds for GD and SGD in Smooth Stochastic Convex Optimization

03/19/2023
by   Peiyuan Zhang, et al.

The learning theory community has recently made progress in characterizing the generalization error of gradient methods for general convex losses. In this work, we focus on how training longer affects generalization in smooth stochastic convex optimization (SCO) problems. We first provide tight lower bounds for general non-realizable SCO problems. Furthermore, existing upper bound results suggest that sample complexity can be improved by assuming the loss is realizable, i.e., a single optimal solution simultaneously minimizes the loss on all data points. However, this improvement is compromised when the training time is long, and matching lower bounds have been lacking. Our paper examines this observation by providing excess risk lower bounds for gradient descent (GD) and stochastic gradient descent (SGD) in two realizable settings: (1) realizable with T = O(n), and (2) realizable with T = Ω(n), where T denotes the number of training iterations and n is the size of the training dataset. These bounds are novel and informative in characterizing the relationship between T and n. In the first case, with a small training horizon, our lower bounds almost tightly match the corresponding upper bounds and provide the first optimality certificates for them. For the realizable case with T = Ω(n), however, a gap remains between the lower and upper bounds. We conjecture that this gap can be closed by improving the upper bounds, a conjecture supported by our analyses in one-dimensional and linear regression scenarios.
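As a point of reference, here is a minimal formalization of the quantities the abstract refers to, written in standard SCO notation; the symbols below (the population risk F, the per-sample loss f(w; z), the data distribution D, and the algorithm's output w_T) are conventional choices and are not taken from the paper itself.

    F(w) = \mathbb{E}_{z \sim \mathcal{D}}\big[ f(w; z) \big], \qquad
    \widehat{F}_n(w) = \frac{1}{n} \sum_{i=1}^{n} f(w; z_i), \qquad
    \text{excess risk of } w_T := F(w_T) - \min_{w} F(w).

Under this notation, realizability as described in the abstract asks for a single minimizer w^* with w^* \in \arg\min_{w} f(w; z) for (almost) every data point z, so that one solution simultaneously minimizes every per-sample loss.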
