Tabula: Efficiently Computing Nonlinear Activation Functions for Secure Neural Network Inference

03/05/2022
by   Maximilian Lam, et al.

Multiparty computation approaches to secure neural network inference traditionally rely on garbled circuits for securely executing nonlinear activation functions. However, garbled circuits require excessive communication between server and client, impose significant storage overheads, and incur large runtime penalties. To eliminate these costs, we propose an alternative to garbled circuits: Tabula, an algorithm based on secure lookup tables. Tabula leverages neural networks' ability to be quantized and employs a secure lookup table approach to efficiently, securely, and accurately compute neural network nonlinear activation functions. Compared to garbled circuits with quantized inputs, when computing individual nonlinear functions, our experiments show Tabula uses between 35×–70× less communication, is over 100× faster, and uses a comparable amount of storage. This leads to significant performance gains over garbled circuits with quantized inputs during secure inference on neural networks: Tabula reduces overall communication by up to 9× and achieves a speedup of up to 50×, while imposing comparable storage costs.
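The core observation behind the table-based approach can be illustrated in plaintext: once activations are quantized to a small bitwidth k, a nonlinear function such as ReLU takes only 2^k distinct inputs, so it can be precomputed into a table and evaluated by a single index operation rather than a Boolean circuit. The sketch below shows only this quantize-and-lookup idea, not Tabula's secure protocol; the bitwidth `K` and fixed-point `SCALE` are illustrative assumptions.

```python
import numpy as np

# Plaintext sketch of the lookup-table idea (NOT the secure protocol):
# with activations quantized to K bits, ReLU over all 2^K possible
# inputs can be precomputed offline, making online evaluation a
# single table index.

K = 8        # assumed quantization bitwidth
SCALE = 0.1  # assumed fixed-point scale

# Precompute the table once over every representable quantized value.
# Quantized codes 0 .. 2^K - 1 map to signed real values centered at zero.
codes = np.arange(2 ** K)
real_vals = (codes - 2 ** (K - 1)) * SCALE
table = np.maximum(real_vals, 0.0)  # ReLU evaluated offline

def relu_via_table(x):
    """Quantize x to a K-bit code, then evaluate ReLU by table lookup."""
    code = int(np.clip(np.round(x / SCALE) + 2 ** (K - 1), 0, 2 ** K - 1))
    return float(table[code])
```

In the secure setting, the same table is accessed obliviously under secret sharing, so neither party learns the activation value; the plaintext version above only conveys why quantization makes the table small enough to be practical.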


