Agnostic proper learning of monotone functions: beyond the black-box correction barrier

04/05/2023
by Jane Lange, et al.

We give the first agnostic, efficient, proper learning algorithm for monotone Boolean functions. Given 2^Õ(√(n)/ε) uniformly random examples of an unknown function f:{± 1}^n →{± 1}, our algorithm outputs a hypothesis g:{± 1}^n →{± 1} that is monotone and (opt + ε)-close to f, where opt is the distance from f to the closest monotone function. The running time of the algorithm (and consequently the size and evaluation time of the hypothesis) is also 2^Õ(√(n)/ε), nearly matching the lower bound of Blais et al. (RANDOM '15). We also give an algorithm, with running time 2^Õ(√(n)/ε), for estimating up to additive error ε the distance of an unknown function f to the nearest monotone function. For both of these problems, sample-efficient algorithms were previously known, but they were not run-time efficient; our work thus closes the gap between the run-time and sample complexity of these problems. This work builds upon the improper learning algorithm of Bshouty and Tamon (JACM '96) and the proper semiagnostic learning algorithm of Lange, Rubinfeld, and Vasilyan (FOCS '22), which obtains a non-monotone Boolean-valued hypothesis and then “corrects” it to a monotone one using query-efficient local computation algorithms on graphs. Information-theoretically, this black-box correction approach can achieve no error better than 2·opt + ε; we bypass this barrier by (a) augmenting the improper learner with a convex optimization step, and (b) learning and correcting a real-valued function before rounding its values to Boolean. Our real-valued correction algorithm solves the “poset sorting” problem of [LRV22] for functions over general posets with non-Boolean labels.
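
For intuition, here is a minimal, hypothetical Python sketch of the high-level pattern the abstract describes: estimate a low-degree real-valued approximation from uniform examples (in the spirit of Bshouty and Tamon), correct it to a monotone real-valued function, and only then round to Boolean. This is not the paper's algorithm: it enumerates the entire hypercube and substitutes a crude L2 isotonic-style projection for the paper's convex-optimization and local-correction machinery, so it only illustrates the "learn real-valued, correct, then round" order of operations on very small n. All function names and parameters below are illustrative.

```python
# Illustrative sketch only (hypothetical names; small n only).  This is NOT the
# paper's algorithm: it enumerates the whole hypercube and uses a crude
# isotonic-style projection in place of the paper's convex-optimization and
# local-correction steps.
import itertools
import numpy as np

def low_degree_estimate(samples, labels, n, degree):
    """Empirically estimate the Fourier coefficients of all monomials of
    degree <= `degree` from uniform examples (x, f(x)), x in {-1, +1}^n,
    in the spirit of Bshouty-Tamon low-degree learning."""
    coeffs = {}
    for d in range(degree + 1):
        for S in itertools.combinations(range(n), d):
            chi = samples[:, list(S)].prod(axis=1) if S else np.ones(len(labels))
            coeffs[S] = float(np.mean(labels * chi))   # ~ E[f(x) * chi_S(x)]
    return coeffs

def evaluate(coeffs, x):
    """Evaluate the real-valued low-degree approximation at a point x."""
    return sum(c * np.prod([x[i] for i in S]) for S, c in coeffs.items())

def monotone_correction(values):
    """Make the real-valued table monotone over the hypercube poset by
    repeatedly averaging violated comparable pairs (a crude L2 isotonic
    stand-in, not the paper's correction algorithm)."""
    vals = dict(values)
    pairs = [(x, y) for x in vals for y in vals
             if x != y and all(a <= b for a, b in zip(x, y))]
    for _ in range(500):                       # fixed-point iteration
        changed = False
        for x, y in pairs:                     # x <= y must give vals[x] <= vals[y]
            if vals[x] > vals[y]:
                vals[x] = vals[y] = 0.5 * (vals[x] + vals[y])
                changed = True
        if not changed:
            break
    return vals

def agnostic_monotone_sketch(samples, labels, n, degree=2):
    points = list(itertools.product([-1, 1], repeat=n))
    coeffs = low_degree_estimate(samples, labels, n, degree)
    real_valued = {x: evaluate(coeffs, x) for x in points}
    corrected = monotone_correction(real_valued)     # correct BEFORE rounding
    return {x: 1 if corrected[x] >= 0 else -1 for x in points}
```

Correcting the real-valued table and only then thresholding mirrors the ordering the abstract emphasizes: because the corrected table has no violated comparable pairs, thresholding it yields a monotone Boolean hypothesis, whereas rounding first and then fixing violations on the Boolean hypothesis is the black-box route that cannot beat 2·opt + ε.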

Related research

- Properly learning monotone functions via local reconstruction (04/25/2022): We give a 2^Õ(√(n)/ε)-time algorithm for properly learning monotone Bool...
- Isoperimetric Inequalities for Real-Valued Functions with Applications to Monotonicity Testing (11/18/2020): We generalize the celebrated isoperimetric inequality of Khot, Minzer, a...
- Testing distributional assumptions of learning algorithms (04/14/2022): There are many important high dimensional function classes that have fas...
- Improved Monotonicity Testers via Hypercube Embeddings (11/16/2022): We show improved monotonicity testers for the Boolean hypercube under th...
- Black-box Methods for Restoring Monotonicity (03/21/2020): In many practical applications, heuristic or approximation algorithms ar...
- Monotone Learning (02/10/2022): The amount of training-data is one of the key factors which determines t...
- Learning event-driven switched linear systems (09/27/2020): We propose an automata theoretic learning algorithm for the identificati...
