Large Scale Multimodal Classification Using an Ensemble of Transformer Models and Co-Attention

11/23/2020
by Varnith Chordia, et al.

Accurate and efficient product classification is significant for E-commerce applications, as it enables various downstream tasks such as recommendation, retrieval, and pricing. Items often contain textual and visual information, and utilizing both modalities usually outperforms classification using either modality alone. In this paper we describe our methodology and results for the SIGIR eCom Rakuten Data Challenge. We employ a dual-attention technique to model image-text relationships using pretrained language and image embeddings. While dual attention has been widely used for Visual Question Answering (VQA) tasks, ours is the first attempt to apply the concept to multimodal classification.
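The abstract does not spell out the paper's architecture, but the general co-attention idea it references can be sketched as follows: each modality attends over the other via a bilinear affinity matrix, and the two attended representations are pooled and fused for classification. All names, shapes, and the mean-pooling choice below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def co_attention(text, image, W):
    """Toy co-attention fusion of text and image features (illustrative).

    text:  (n_t, d) pretrained text token embeddings
    image: (n_i, d) pretrained image region embeddings
    W:     (d, d)   bilinear affinity weights (learned in practice)
    """
    # Affinity between every text token and every image region.
    affinity = text @ W @ image.T                   # (n_t, n_i)
    # Each text token attends over image regions, and vice versa.
    text_ctx = softmax(affinity, axis=1) @ image    # (n_t, d)
    image_ctx = softmax(affinity.T, axis=1) @ text  # (n_i, d)
    # Pool each modality and concatenate into one fused vector
    # that a classifier head would consume.
    return np.concatenate([text_ctx.mean(axis=0), image_ctx.mean(axis=0)])

rng = np.random.default_rng(0)
d = 16
fused = co_attention(rng.normal(size=(5, d)),   # 5 text tokens
                     rng.normal(size=(3, d)),   # 3 image regions
                     rng.normal(size=(d, d)))
print(fused.shape)  # (32,)
```

In a real system the pooled vector would feed a softmax layer over the product-category taxonomy, and `W` (along with the embedding backbones) would be trained end-to-end on the classification loss.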
