Privacy-Adversarial User Representations in Recommender Systems

by Yehezkel S. Resheff, et al.

Latent factor models for recommender systems represent users and items as low-dimensional vectors. Privacy risks have previously been studied mostly in the context of recovering personal information, in the form of usage records, from the training data. However, the user representations themselves may be combined with external data to recover private user attributes such as gender and age. In this paper we show that user vectors computed by a common recommender system can be exploited in this way. We propose the privacy-adversarial framework to eliminate such leakage, and study the trade-off between recommender performance and leakage both theoretically and empirically on a benchmark dataset. We briefly discuss further applications of this method towards the generation of deeper and more insightful recommendations.
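The core idea of privacy-adversarial training can be illustrated with a small sketch. The setup below is an assumption for illustration, not the authors' exact model: user embeddings are trained to reconstruct ratings while a logistic-regression adversary tries to predict a private attribute (here a synthetic binary "gender" label) from the same embeddings. A gradient-reversal term updates the embeddings to *increase* the adversary's loss, pushing them towards being uninformative about the attribute; the coefficient `lam` controls the trade-off between recommendation accuracy and leakage.

```python
import numpy as np

# Minimal sketch of privacy-adversarial representation learning
# (illustrative setup with synthetic data, not the paper's exact model).
rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 30, 8
U = rng.normal(0, 0.1, (n_users, dim))    # user embeddings
V = rng.normal(0, 0.1, (n_items, dim))    # item embeddings
w = rng.normal(0, 0.1, dim)               # adversary's logistic weights
gender = rng.integers(0, 2, n_users)      # private attribute (synthetic)
R = rng.normal(0, 1, (n_users, n_items))  # synthetic rating matrix

lr, lam = 0.05, 1.0  # lam trades recommender accuracy vs. attribute leakage
for _ in range(200):
    # Recommender objective: squared error on ratings.
    err = U @ V.T - R
    gU = err @ V / n_items
    gV = err.T @ U / n_users

    # Adversary objective: logistic loss predicting gender from U.
    p = 1.0 / (1.0 + np.exp(-(U @ w)))
    g_w = U.T @ (p - gender) / n_users
    g_adv_U = np.outer(p - gender, w) / n_users

    # Adversary descends on its own loss; the embeddings take the
    # REVERSED adversarial gradient, so they ascend on the adversary's
    # loss while still descending on the recommender loss.
    w -= lr * g_w
    U -= lr * (gU - lam * g_adv_U)
    V -= lr * gV
```

In a full implementation the adversary would typically be a neural network trained jointly with the recommender (e.g. via a gradient-reversal layer), and `lam` would be tuned to trace the performance-leakage trade-off curve studied in the paper.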


