A Strong and Robust Baseline for Text-Image Matching

06/04/2019
by Fangyu Liu, et al.

We review current schemes for text-image matching models and propose improvements to both training and inference. First, we empirically show the limitations of two popular loss functions (the sum-margin and max-margin losses) widely used for training text-image embeddings, and propose a trade-off between them: a kNN-margin loss which 1) exploits information from hard negatives and 2) is robust to noise, since all k hardest samples are taken into account, tolerating pseudo-negatives and outliers. Second, we advocate the use of Inverted Softmax (IS) and Cross-modal Local Scaling (CSLS) at inference time to mitigate the so-called hubness problem in high-dimensional embedding spaces, improving scores on all metrics by a large margin.
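The two ideas in the abstract can be sketched in a few lines of NumPy. This is an illustrative sketch, not the paper's implementation: it assumes a square cosine-similarity matrix `sim` whose diagonal holds the matched image-text pairs, and the function names and default hyperparameters (`k`, `margin`) are our own choices for illustration.

```python
import numpy as np

def knn_margin_loss(sim, k=3, margin=0.2):
    """kNN-margin loss sketch: for each matched pair on the diagonal,
    apply a hinge penalty to the k hardest (most similar) negatives,
    in both the image-to-text and text-to-image directions."""
    n = sim.shape[0]
    pos = np.diag(sim)                      # similarity of matched pairs
    loss = 0.0
    for i in range(n):
        neg_i2t = np.delete(sim[i, :], i)   # captions mismatched with image i
        neg_t2i = np.delete(sim[:, i], i)   # images mismatched with caption i
        hard_i2t = np.sort(neg_i2t)[-k:]    # k hardest negatives per direction
        hard_t2i = np.sort(neg_t2i)[-k:]
        loss += np.maximum(0.0, margin + hard_i2t - pos[i]).sum()
        loss += np.maximum(0.0, margin + hard_t2i - pos[i]).sum()
    return loss / n

def csls(sim, k=10):
    """CSLS re-scoring sketch: subtract from each score the mean
    similarity of each item to its k nearest cross-modal neighbours,
    penalizing 'hubs' that are close to everything."""
    k = min(k, sim.shape[0], sim.shape[1])
    r_img = np.sort(sim, axis=1)[:, -k:].mean(axis=1, keepdims=True)
    r_txt = np.sort(sim, axis=0)[-k:, :].mean(axis=0, keepdims=True)
    return 2.0 * sim - r_img - r_txt
```

With `k = 1` the kNN-margin loss reduces to the max-margin (hardest-negative) loss; larger `k` averages over several hard negatives, which is what makes it tolerant of a single mislabeled or pseudo-negative sample. Retrieval is then performed by ranking the CSLS-adjusted scores instead of the raw similarities.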
