Efficient Multitask Learning on Resource-Constrained Systems

02/25/2023
by Yubo Luo, et al.

We present Antler, which exploits the affinity between all pairs of tasks in a multitask inference system to construct a compact graph representation of the task set and to find an optimal order of execution, such that the end-to-end time and energy cost of inference is reduced while the accuracy remains comparable to the state of the art. The design of Antler is based on two observations: first, tasks running on the same platform show affinity, which we leverage to find a compact graph representation of the task set that avoids redundant computation of overlapping subtasks; and second, tasks that run on the same system may have dependencies, which we leverage to find an ordering of the tasks that avoids unnecessary computation of dependent tasks or of the remaining portion of a task. We implement two systems: a 16-bit TI MSP430FR5994-based custom-designed ultra-low-power system, and a 32-bit ARM Cortex-M4/M7-based off-the-shelf STM32H747 board. We conduct both dataset-driven experiments and real-world deployments with these systems. We observe that Antler's execution time and energy consumption are the lowest among all baseline systems: by leveraging the similarity of tasks and reusing intermediate results from previous tasks, Antler reduces inference time by 2.3X – 4.6X and saves 56% – 78% energy compared to the state of the art.
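To make the two ideas in the abstract concrete, the sketch below illustrates one way affinity-based reuse and task ordering could work: each task is modeled as a sequence of layer/block identifiers, pairwise affinity is measured as the length of the shared computation prefix, and the task order is chosen to maximize total reuse between consecutive tasks. This is a minimal, hypothetical illustration only; the names (Task, affinity, order_tasks) and the brute-force search are assumptions for exposition and do not reflect Antler's actual algorithm or API.

```python
# Hypothetical sketch: affinity-based reuse and task ordering for multitask
# inference. Not Antler's implementation; names and modeling are illustrative.

from itertools import permutations
from typing import List, Tuple

Task = Tuple[str, ...]  # a task modeled as a sequence of layer/block ids


def affinity(a: Task, b: Task) -> int:
    """Length of the shared prefix of two tasks' layer sequences.

    The shared prefix is the portion of computation that can be reused
    when one task runs immediately after the other.
    """
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n


def order_tasks(tasks: List[Task]) -> List[Task]:
    """Brute-force search for the order that maximizes total reuse.

    Feasible only for small task sets; a real system would prune the
    search (e.g., branch and bound) or use a greedy heuristic.
    """
    def reuse(seq) -> int:
        return sum(affinity(seq[i], seq[i + 1]) for i in range(len(seq) - 1))

    return list(max(permutations(tasks), key=reuse))


if __name__ == "__main__":
    # Three hypothetical tasks that share an early feature-extraction trunk.
    tasks = [
        ("conv1", "conv2", "fc_a"),
        ("conv1", "conv2", "conv3", "fc_b"),
        ("conv1", "fc_c"),
    ]
    for t in order_tasks(tasks):
        print(t)
```

Under this modeling, executing tasks in the returned order lets each task skip the prefix already computed by its predecessor, which is the intuition behind the reported time and energy savings.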

