Neuro-Modulated Hebbian Learning for Fully Test-Time Adaptation

03/02/2023
by Yushun Tang, et al.

Fully test-time adaptation aims to adapt the network model based on sequential analysis of input samples during the inference stage, addressing the cross-domain performance degradation problem of deep neural networks. We take inspiration from biologically plausible learning, where neuron responses are tuned through a local synapse-change procedure and shaped by competitive lateral inhibition rules. Based on these feed-forward learning rules, we design a soft Hebbian learning process that provides an effective unsupervised mechanism for online adaptation. We observe that the performance of this feed-forward Hebbian learning for fully test-time adaptation can be significantly improved by incorporating a feedback neuro-modulation layer, which fine-tunes the neuron responses based on external feedback generated by error back-propagation from the top inference layers. This leads to our proposed neuro-modulated Hebbian learning (NHL) method for fully test-time adaptation. By combining unsupervised feed-forward soft Hebbian learning with a learned neuro-modulator that captures feedback from external responses, the source model can be effectively adapted during testing. Experimental results on benchmark datasets demonstrate that our proposed method significantly improves the adaptation performance of network models and outperforms existing state-of-the-art methods.
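To make the two components concrete, the following is a minimal PyTorch sketch of the overall idea, not the authors' released implementation: the class names (SoftHebbianConv, NeuroModulator), the specific soft-competition rule with Oja-style decay, the per-channel affine modulation, and the prediction-entropy objective are illustrative assumptions rather than the exact NHL formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SoftHebbianConv(nn.Module):
    """Feed-forward layer updated by a local, unsupervised soft Hebbian rule;
    a softmax over output channels plays the role of competitive lateral inhibition."""
    def __init__(self, in_ch, out_ch, lr=1e-3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.conv.weight.requires_grad_(False)  # adapted locally, not by back-propagation
        self.lr = lr

    @torch.no_grad()
    def hebbian_update(self, x, y):
        post = F.softmax(y.flatten(2), dim=1)        # (B, C_out, H*W) soft competition
        pre = F.unfold(x, kernel_size=3, padding=1)  # (B, C_in*9, H*W) pre-synaptic patches
        norm = pre.shape[0] * pre.shape[-1]
        # Hebbian correlation of post-synaptic soft responses with pre-synaptic inputs.
        dw = torch.einsum('bch,bkh->ck', post, pre) / norm
        w = self.conv.weight.view(self.conv.weight.shape[0], -1)
        # Oja-style decay keeps the weights bounded during online adaptation.
        w += self.lr * (dw - post.sum(dim=(0, 2))[:, None] / norm * w)

    def forward(self, x):
        y = self.conv(x)
        self.hebbian_update(x, y)  # online, label-free update at test time
        return y

class NeuroModulator(nn.Module):
    """Feedback layer: a per-channel affine modulation of the Hebbian features,
    trained by back-propagating an unsupervised loss from the inference layers."""
    def __init__(self, ch):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1, ch, 1, 1))
        self.shift = nn.Parameter(torch.zeros(1, ch, 1, 1))

    def forward(self, h):
        return h * self.scale + self.shift

def prediction_entropy(logits):
    """Entropy of the model's predictions, a common unsupervised test-time objective."""
    logp = F.log_softmax(logits, dim=1)
    return -(logp.exp() * logp).sum(dim=1).mean()

# Sketch of one adaptation step on a test batch x, with classifier_head denoting a
# hypothetical frozen source-model head:
#   hebb, mod = SoftHebbianConv(3, 64), NeuroModulator(64)
#   opt = torch.optim.SGD(mod.parameters(), lr=1e-3)
#   logits = classifier_head(mod(hebb(x)))  # forward pass already applies the local Hebbian update
#   prediction_entropy(logits).backward()   # feedback signal updates only the neuro-modulator
#   opt.step(); opt.zero_grad()
```

In this sketch the feed-forward layer never sees a gradient: it adapts purely through its local rule, while the external feedback from the top layers reaches only the neuro-modulator, mirroring the feed-forward/feedback split described in the abstract.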
