ShiftNAS: Improving One-shot NAS via Probability Shift

07/17/2023
by Mingyang Zhang, et al.

One-shot neural architecture search (one-shot NAS) has been proposed as a time-efficient approach that obtains optimal subnet architectures and weights under different complexity budgets by training the supernet only once. However, the subnet performance obtained through weight sharing is often inferior to that achieved by retraining. In this paper, we investigate this performance gap and attribute it to uniform sampling, the common approach in supernet training. Uniform sampling concentrates training resources on subnets with intermediate computational cost, which are sampled with high probability, yet subnets in different complexity regions require different training strategies to reach their best performance. To address this problem, we propose ShiftNAS, a method that adjusts the sampling probability based on subnet complexity. We achieve this by evaluating the performance variation of subnets with different complexity and designing an architecture generator that can accurately and efficiently provide subnets with the desired complexity. Both the sampling probability and the architecture generator are trained end-to-end in a gradient-based manner. With ShiftNAS, we can directly obtain the optimal model architecture and parameters for a given computational budget. We evaluate our approach on multiple visual network models, including convolutional neural networks (CNNs) and vision transformers (ViTs), and demonstrate that ShiftNAS is model-agnostic. Experimental results on ImageNet show that ShiftNAS improves the performance of one-shot NAS without additional resource consumption. Source code is available at https://github.com/bestfleer/ShiftNAS.
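The abstract describes shifting per-complexity sampling probabilities during supernet training, but it does not spell out the update rule. The snippet below is a minimal, self-contained sketch (NumPy only) of the probability-shift idea under stated assumptions: the complexity bins, the `observed_improvement` stand-in, and the heuristic logit update are illustrative, not the authors' formulation, which trains both the sampling probabilities and an architecture generator end-to-end with gradients.

```python
import numpy as np

# Hypothetical sketch of complexity-aware sampling for supernet training.
# The bin discretization, observed_improvement(), and the logit update rule
# are illustrative assumptions, not the ShiftNAS algorithm itself.

rng = np.random.default_rng(0)

# Discretize subnet complexity (e.g. FLOPs) into bins; start from uniform sampling.
complexity_bins = np.linspace(200e6, 600e6, 9)   # candidate FLOPs targets
logits = np.zeros(len(complexity_bins))          # adjustable sampling logits

def sampling_probs(logits):
    """Softmax over the logits gives the per-bin sampling probability."""
    z = np.exp(logits - logits.max())
    return z / z.sum()

def observed_improvement(bin_idx, step):
    """Stand-in for the measured performance variation of subnets in a bin.
    In a real pipeline this would come from validating sampled subnets."""
    return rng.normal(loc=0.1 * (bin_idx + 1) / (step + 1), scale=0.02)

lr = 0.5
for step in range(100):
    probs = sampling_probs(logits)
    bin_idx = rng.choice(len(complexity_bins), p=probs)

    # ... here an architecture generator would produce a subnet with roughly
    # complexity_bins[bin_idx] FLOPs and one supernet training step would run ...

    # Shift probability mass toward complexity regions whose subnets are still
    # improving, and away from regions that have saturated (assumed rule).
    gain = observed_improvement(bin_idx, step)
    baseline = 0.05
    logits[bin_idx] += lr * (gain - baseline)

print("final sampling probabilities:", np.round(sampling_probs(logits), 3))
```

In the actual method, the sampled complexity target would be passed to the trained architecture generator to produce a concrete subnet for the next supernet update; the sketch only indicates that step with a comment and replaces the gradient-based probability update with a simple heuristic.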


