One Person, One Model, One World: Learning Continual User Representation without Forgetting

09/29/2020
by Fajie Yuan, et al.

Learning generic user representations that can then be applied to other user-related tasks (e.g., profile prediction and recommendation) has recently attracted much attention. Existing approaches often derive an individual set of model parameters for each task by training on that task's own data. However, user representations across tasks usually share underlying commonalities, so these separately trained representations can be suboptimal in performance as well as inefficient in terms of parameter sharing. In this paper, we study continually learning user representations task by task, where new tasks are learned while reusing parameters from old ones. A new problem arises: when new tasks are trained, previously learned parameters are very likely to be modified, and as a result an artificial neural network (ANN)-based model may permanently lose its capacity to serve previously well-trained tasks, a phenomenon termed catastrophic forgetting. To address this issue, we present Conure, the first continual, or lifelong, user representation learner, i.e., one that learns new tasks over time without forgetting old ones. Specifically, motivated by the fact that neural network models are highly over-parameterized, we propose iteratively pruning unimportant weights from a well-optimized backbone representation model. A new task can then be learned by sharing the retained parameters of previous tasks and training new parameters only in the space freed by pruning. We conduct extensive experiments on two real-world datasets across nine tasks and demonstrate that Conure performs substantially better than common models that do not purposely preserve such old "knowledge", and is competitive with, and sometimes better than, models trained either individually for each task or jointly on all task data prepared together.
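To make the prune-and-reuse idea concrete, the following is a minimal NumPy sketch of the general mechanism described above, not the authors' Conure implementation: after a task is trained, its lowest-magnitude weights are pruned to free capacity, and the next task trains only on the freed slots while the surviving weights of earlier tasks stay frozen. The names train_task, prune_task, task_of, and keep_ratio, as well as the toy "training" step, are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# A single over-parameterized weight matrix standing in for the backbone model.
W = rng.normal(size=(8, 8))

# task_of[i, j] records which task "owns" each weight; -1 means the slot is free.
task_of = np.full(W.shape, -1, dtype=int)

def train_task(W, trainable, task_id):
    # Stand-in for gradient training: only weights marked trainable may change;
    # weights owned by earlier tasks stay frozen, so old tasks are not forgotten.
    update = 0.1 * rng.normal(size=W.shape)
    W[trainable] += update[trainable]

def prune_task(W, task_of, task_id, keep_ratio=0.5):
    # Magnitude pruning: keep only the largest-magnitude weights learned for
    # task_id; the rest are zeroed and returned to the free pool.
    owned = task_of == task_id
    mags = np.abs(W[owned])
    k = max(1, int(keep_ratio * mags.size))
    threshold = np.sort(mags)[-k]
    drop = owned & (np.abs(W) < threshold)
    W[drop] = 0.0
    task_of[drop] = -1  # freed capacity for future tasks

for task_id in range(3):  # learn three tasks in sequence
    free = task_of == -1
    task_of[free] = task_id        # the new task trains only on free slots
    train_task(W, free, task_id)
    prune_task(W, task_of, task_id, keep_ratio=0.5)
    print(f"after task {task_id}: occupied weights = {(task_of >= 0).sum()} / {W.size}")

# At inference time for task t, the model uses the weights owned by tasks 0..t;
# because those weights are never overwritten by later tasks, old tasks keep working.

Under these assumptions, the per-weight ownership record is what prevents forgetting: once a weight survives pruning for a task, it becomes read-only for every later task and is only shared forward.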

Related research

03/24/2022  Probing Representation Forgetting in Supervised and Unsupervised Continual Learning
07/23/2019  Adaptive Compression-based Lifelong Learning
04/03/2023  Knowledge Accumulation in Continually Learned Representations and the Issue of Feature Forgetting
06/09/2021  Optimizing Reusable Knowledge for Continual Learning via Metalearning
07/17/2021  Continual Learning for Task-oriented Dialogue System with Iterative Network Pruning, Expanding and Masking
04/15/2022  Incremental Prompting: Episodic Memory Prompt for Lifelong Event Detection
05/28/2018  Perceive Your Users in Depth: Learning Universal User Representations from Multiple E-commerce Tasks
