Nonparametric regression with modified ReLU networks

07/17/2022
by Aleksandr Beknazaryan, et al.

We consider regression estimation with modified ReLU neural networks, in which the network weight matrices are first modified by a function α before being multiplied by input vectors. We give an example of a continuous, piecewise linear function α for which the empirical risk minimizers over the classes of modified ReLU networks with l_1 and squared l_2 penalties attain, up to a logarithmic factor, the minimax rate of prediction of an unknown β-smooth function.
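As a rough sketch of the setup (not the authors' code): a modified ReLU network computes an ordinary feedforward ReLU pass, except that each weight matrix W is first passed through α entrywise. The soft-thresholding map used for α below, the presence of bias terms, and the placement of the l_1 / squared l_2 penalty on the raw weights are all illustrative assumptions; the abstract does not specify these details.

```python
import numpy as np

def alpha(w, t=1.0):
    # Placeholder continuous, piecewise linear modification function
    # (soft-thresholding); the paper's concrete alpha is an assumption here.
    return np.sign(w) * np.maximum(np.abs(w) - t, 0.0)

def modified_relu_forward(x, weights, biases):
    # Feedforward pass in which each weight matrix W is replaced by
    # alpha(W), applied entrywise, before multiplying the layer input.
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(alpha(W) @ h + b, 0.0)   # ReLU hidden layers
    return alpha(weights[-1]) @ h + biases[-1]  # linear output layer

def penalized_empirical_risk(X, y, weights, biases, lam, penalty="l1"):
    # Squared-error empirical risk plus an l_1 or squared l_2 penalty,
    # here placed on the raw (unmodified) weights -- an assumption,
    # since the abstract does not say what the penalties act on.
    preds = np.array([modified_relu_forward(x, weights, biases) for x in X])
    risk = np.mean((preds.ravel() - y) ** 2)
    flat = np.concatenate([W.ravel() for W in weights])
    reg = np.abs(flat).sum() if penalty == "l1" else (flat ** 2).sum()
    return risk + lam * reg

# Tiny usage example with random data.
rng = np.random.default_rng(0)
dims = [3, 16, 16, 1]  # input dim, two hidden widths, output dim
weights = [rng.normal(size=(m, n)) for n, m in zip(dims[:-1], dims[1:])]
biases = [rng.normal(size=m) for m in dims[1:]]
X = rng.normal(size=(8, 3))
y = rng.normal(size=8)
print(penalized_empirical_risk(X, y, weights, biases, lam=0.01))
```

In the paper's setting, the estimator would be an empirical risk minimizer of such a penalized objective over a suitable class of modified ReLU networks; the sketch above only evaluates the objective for one network.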


Related research

06/14/2023
Nonparametric regression using over-parameterized shallow ReLU neural networks
It is shown that over-parameterized neural networks can achieve minimax ...

10/16/2020
Quantile regression with ReLU Networks: Estimators and minimax rates
Quantile regression is the task of estimating a specified percentile res...

06/12/2019
Decoupling Gating from Linearity
ReLU neural-networks have been in the focus of many recent theoretical w...

06/01/2023
Learning Prescriptive ReLU Networks
We study the problem of learning optimal policy from a set of discrete t...

04/07/2022
A Modified Net Reclassification Improvement Statistic
The continuous net reclassification improvement (NRI) statistic is a pop...

04/20/2022
Deep Learning meets Nonparametric Regression: Are Weight-Decayed DNNs Locally Adaptive?
We study the theory of neural network (NN) from the lens of classical no...

02/06/2019
On the CVP for the root lattices via folding with deep ReLU neural networks
Point lattices and their decoding via neural networks are considered in ...
