Extending Universal Approximation Guarantees: A Theoretical Justification for the Continuity of Real-World Learning Tasks

12/06/2022
by   Naveen Durvasula, et al.

Universal Approximation Theorems establish the density of various classes of neural network function approximators in C(K, ℝ^m), where K ⊂ ℝ^n is compact. In this paper, we aim to extend these guarantees by establishing conditions on learning tasks that guarantee their continuity. We consider learning tasks given by conditional expectations x ↦ E[Y | X = x], where the learning target Y = f ∘ L is a potentially pathological transformation of some underlying data-generating process L. Under a factorization L = T ∘ W of the data-generating process, where T is thought of as a deterministic map acting on some random input W, we establish conditions (which may be easily verified using knowledge of T alone) that guarantee the continuity of practically any derived learning task x ↦ E[f ∘ L | X = x]. We motivate the realism of our conditions using the example of randomized stable matching, thus providing a theoretical justification for the continuity of real-world learning tasks.
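
As a rough illustration of the setup described above, the sketch below builds a hypothetical toy instance: the particular choices of W, T, f, and X are illustrative assumptions, not the paper's construction. The transformed target f ∘ T ∘ W is discontinuous, yet the derived learning task x ↦ E[f(T(W)) | X = x] is estimated to vary smoothly in x, which is the kind of continuity the paper's conditions are meant to guarantee.

```python
import numpy as np

# Hypothetical toy instance (assumed for illustration, not taken from the paper):
#   W ~ Uniform(0, 1)   -- random input to the data-generating process
#   T(w) = w            -- deterministic data-generating map
#   f(y) = 1[y > 1/2]   -- a discontinuous ("pathological") transformation
#   X = W + noise       -- observed feature
# The learning task is g(x) = E[f(T(W)) | X = x]; conditioning on a noisy
# observation of W smooths the discontinuity of f.

rng = np.random.default_rng(0)

def sample_joint(n):
    w = rng.uniform(0.0, 1.0, size=n)       # underlying randomness W
    x = w + rng.normal(0.0, 0.1, size=n)    # observed feature X
    y = (w > 0.5).astype(float)             # target Y = f(T(W))
    return x, y

def estimate_g(x_query, x, y, bandwidth=0.02):
    # Crude Nadaraya-Watson (kernel-weighted) estimate of E[Y | X = x_query].
    weights = np.exp(-0.5 * ((x - x_query) / bandwidth) ** 2)
    return float(np.sum(weights * y) / np.sum(weights))

x, y = sample_joint(200_000)
for xq in np.linspace(0.3, 0.7, 9):
    print(f"x = {xq:.2f}   E[Y | X = x] ≈ {estimate_g(xq, x, y):.3f}")
```

The kernel estimator here is only a stand-in for the conditional expectation; the printed values rise smoothly from near 0 to near 1 as x crosses 0.5, even though the target f itself jumps at that point.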
