Fixflow: A Framework to Evaluate Fixed-point Arithmetic in Light-Weight CNN Inference

02/19/2023
by Farhad Taheri, et al.
Convolutional neural networks (CNNs) are widely used on resource-constrained devices in IoT applications. To reduce computational complexity and memory footprint, these devices use fixed-point representation, which consumes less area and energy in hardware while achieving classification accuracy similar to floating-point. However, employing low-precision fixed-point representation requires various considerations to maintain high accuracy. Although many quantization and re-training techniques have been proposed to improve inference accuracy, these approaches are time-consuming and require access to the entire dataset. This paper investigates the effect of different fixed-point hardware units on CNN inference accuracy. To this end, we provide a framework called Fixflow to evaluate the effect of fixed-point computations performed at the hardware level on CNN classification accuracy. Different fixed-point considerations can be employed in hardware accelerators, including the rounding method and the precision of the fixed-point operation's result. Fixflow can determine the impact of employing different arithmetic units (such as truncated multipliers) on CNN classification accuracy. Moreover, we evaluate the energy and area consumption of these units in hardware accelerators. We perform experiments on two common datasets, MNIST and CIFAR-10. Our results show that employing different methods at the hardware level, especially at low precision, can significantly change the classification accuracy.
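To make the abstract's point concrete, the sketch below contrasts a truncating fixed-point multiplier with a round-to-nearest one. This is a minimal illustration, not Fixflow's actual API; the names `to_fixed`, `fixed_mul`, and `frac_bits` are hypothetical, and a Qm.f format with `frac_bits` fractional bits is assumed.

```python
def to_fixed(x, frac_bits):
    # Quantize a real value to an integer with `frac_bits` fractional bits.
    return int(round(x * (1 << frac_bits)))

def fixed_mul(a, b, frac_bits, mode="nearest"):
    # The raw integer product carries 2*frac_bits fractional bits and must
    # be rescaled back to frac_bits; `mode` picks how the low bits are lost.
    prod = a * b
    if mode == "truncate":
        # Drop the low bits outright, as a truncated multiplier does.
        return prod >> frac_bits
    if mode == "nearest":
        # Round to nearest: add half of the discarded weight, then shift.
        return (prod + (1 << (frac_bits - 1))) >> frac_bits
    raise ValueError(f"unknown rounding mode: {mode}")

if __name__ == "__main__":
    frac_bits = 4  # deliberately low precision so the gap is visible
    a = to_fixed(0.8, frac_bits)   # 13/16 = 0.8125
    b = to_fixed(0.44, frac_bits)  # 7/16  = 0.4375
    # Exact product is 91/256, roughly 0.3555.
    for mode in ("truncate", "nearest"):
        r = fixed_mul(a, b, frac_bits, mode)
        print(mode, r / (1 << frac_bits))  # truncate: 0.3125, nearest: 0.375
```

At 4 fractional bits the two rounding modes already disagree by a full quantization step on a single multiply; accumulated over the thousands of multiply-accumulate operations in a CNN layer, such per-operation differences are what can shift classification accuracy at low precision.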
