Will Multi-modal Data Improve Few-shot Learning?

07/25/2021
by Zilun Zhang, et al.

Most few-shot learning models utilize only one modality of data. We would like to investigate, qualitatively and quantitatively, how much the model improves when an extra modality (i.e. a text description of the image) is added, and how it affects the learning procedure. To achieve this goal, we propose four types of fusion methods to combine the image feature and the text feature. To verify the effectiveness of the improvement, we test the fusion methods with two classical few-shot learning models, ProtoNet and MAML, with image feature extractors such as ConvNet and ResNet12. The attention-based fusion method works best, improving the classification accuracy by a large margin of around 30% compared to the baseline result.
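The abstract does not spell out the fusion architectures themselves. As a rough illustration only, a gated attention-style fusion of an image feature and a text feature might look like the PyTorch sketch below; the class name, the 640/300 feature dimensions, and the gating network are hypothetical and are not taken from the paper.

```python
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    """Illustrative attention-based fusion of an image feature and a text feature.

    The text feature is projected to the image-feature dimension, then a learned
    scalar gate decides how much of each modality enters the fused vector.
    This is a sketch of the general idea, not the authors' exact architecture.
    """

    def __init__(self, image_dim: int, text_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, image_dim)  # align text to image space
        self.attn = nn.Sequential(                       # per-sample scalar gate in [0, 1]
            nn.Linear(2 * image_dim, image_dim),
            nn.ReLU(),
            nn.Linear(image_dim, 1),
            nn.Sigmoid(),
        )

    def forward(self, image_feat: torch.Tensor, text_feat: torch.Tensor) -> torch.Tensor:
        text_feat = self.text_proj(text_feat)
        gate = self.attn(torch.cat([image_feat, text_feat], dim=-1))
        return gate * image_feat + (1.0 - gate) * text_feat  # convex combination


# Example: fuse 5 image features (dim 640, e.g. a ResNet12 embedding) with text features (dim 300)
fusion = AttentionFusion(image_dim=640, text_dim=300)
fused = fusion(torch.randn(5, 640), torch.randn(5, 300))
print(fused.shape)  # torch.Size([5, 640])
```

The fused vector keeps the image-feature dimensionality, so it can be dropped into a ProtoNet or MAML pipeline in place of the image-only embedding.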
