Empirical Hypothesis Space Reduction

09/04/2019
by Akihiro Yabe, et al.

Selecting appropriate regularization coefficients is critical to the performance of regularized empirical risk minimization. Existing theoretical approaches choose the coefficients so that the regularized empirical objective upper-bounds the true objective uniformly over the hypothesis space. Such an approach is, however, known to be over-conservative, especially in high-dimensional settings with a large hypothesis space. In fact, an existing generalization error bound for variance-based regularization is O(√(d ln n/n)), where d is the dimension of the hypothesis space, so the number of samples required for convergence grows linearly with d. This paper proposes an algorithm that calculates a regularization coefficient yielding a faster generalization error convergence of O(√(ln n/n)), whose leading term is independent of the dimension d. This faster convergence, free of dependence on the size of the hypothesis space, is achieved by means of empirical hypothesis space reduction, which, with high probability, shrinks the hypothesis space without losing the true optimum. Computing uniform upper bounds over the reduced space then accelerates the convergence of the generalization error.
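The abstract only describes the approach at a high level. As a rough illustration of the general pattern, the Python sketch below applies a variance-based regularized objective over a finite hypothesis set and then discards hypotheses whose empirical objective falls outside a confidence margin of the empirical optimum. The objective form, the coefficient, and the retention margin are illustrative assumptions chosen for the toy example, not the quantities derived in the paper.

import numpy as np

def variance_regularized_objective(losses, coef):
    # Empirical risk plus a variance-based regularizer; `losses` is an
    # (n,) array of per-sample losses for one hypothesis.
    n = len(losses)
    return losses.mean() + coef * np.sqrt(losses.var(ddof=1) / n)

def empirical_hypothesis_space_reduction(loss_matrix, coef, margin):
    # Keep only hypotheses whose regularized empirical objective lies within
    # `margin` of the empirical optimum. In the paper's analysis the margin
    # comes from concentration bounds so that, with high probability, the
    # true optimum survives the reduction; the margin used here is a
    # hypothetical stand-in for that choice.
    objectives = np.array(
        [variance_regularized_objective(losses, coef) for losses in loss_matrix]
    )
    best = objectives.min()
    kept = np.flatnonzero(objectives <= best + margin)
    return kept, objectives

# Toy usage over a finite hypothesis set: rows = hypotheses, columns = samples.
rng = np.random.default_rng(0)
loss_matrix = rng.uniform(0.0, 1.0, size=(50, 200))  # 50 hypotheses, 200 samples
n = loss_matrix.shape[1]
coef = np.sqrt(2 * np.log(n))      # illustrative coefficient, not the paper's
margin = 2 * coef / np.sqrt(n)     # illustrative margin, not the paper's
kept, objectives = empirical_hypothesis_space_reduction(loss_matrix, coef, margin)
print(f"reduced hypothesis space from {len(objectives)} to {len(kept)} candidates")

Uniform upper bounds would then be computed only over the surviving candidates, which is where, per the abstract, the dimension-independent leading term comes from.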
