∂𝔹 nets: learning discrete functions by gradient descent

05/12/2023
by Ian Wright, et al.

∂𝔹 nets are differentiable neural networks that learn discrete boolean-valued functions by gradient descent. A ∂𝔹 net has two semantically equivalent aspects: a differentiable soft-net, with real weights, and a non-differentiable hard-net, with boolean weights. We train the soft-net by backpropagation and then 'harden' the learned weights to yield boolean weights that bind with the hard-net. The result is a learned discrete function. Hardening involves no loss of accuracy, unlike existing approaches to neural network binarization. Preliminary experiments demonstrate that ∂𝔹 nets achieve comparable performance on standard machine learning problems, yet are compact (due to 1-bit weights) and interpretable (due to the logical nature of the learned functions).
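To make the soft/hard duality concrete, here is a minimal hypothetical sketch. It assumes a "soft bit" is a real weight in [0, 1] that hardens by thresholding at 0.5, and it uses min/max/complement as the differentiable surrogates for AND/OR/NOT; the paper's actual operators and hardening rule may differ.

```python
# Hypothetical sketch of the soft-net / hard-net duality (not the
# paper's exact construction). A soft bit is a real value in [0, 1].

def harden(w):
    """Threshold a soft bit to a hard boolean (assumed rule)."""
    return w > 0.5

# Differentiable surrogate gates on soft bits (one possible choice).
def soft_and(a, b):
    return min(a, b)

def soft_or(a, b):
    return max(a, b)

def soft_not(a):
    return 1.0 - a

# Boolean counterparts used by the hard-net after hardening.
def hard_and(a, b):
    return a and b

def hard_or(a, b):
    return a or b

# Illustration of lossless hardening: for these gates, hardening
# commutes with evaluation (away from inputs exactly at 0.5), e.g.
# min(a, b) > 0.5 iff a > 0.5 and b > 0.5.
a, b = 0.9, 0.2
assert harden(soft_and(a, b)) == hard_and(harden(a), harden(b))
assert harden(soft_or(a, b)) == hard_or(harden(a), harden(b))
```

With min/max gates the equivalence is exact because thresholding distributes over min and max, which is one way the "no loss of accuracy" claim can hold; how the actual ∂𝔹 construction guarantees it is detailed in the full paper.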
