Visual Question Answering in Remote Sensing with Cross-Attention and Multimodal Information Bottleneck

06/25/2023
by Jayesh Songara, et al.

In this research, we address the problem of visual question answering (VQA) in remote sensing. While remotely sensed images contain information relevant to identification and object detection tasks, their high dimensionality, volume, and redundancy make them challenging to process. Furthermore, processing image information jointly with language features adds further constraints, such as mapping between corresponding image and language features. To handle this problem, we propose a cross-attention-based approach combined with information maximization. The CNN-LSTM-based cross-attention highlights the information in the image and language modalities and establishes a connection between the two, while information maximization learns a low-dimensional bottleneck layer that retains all the information relevant to the VQA task. We evaluate our method on two remote sensing VQA datasets of different resolutions. We achieve an overall accuracy of 79.11% on the high-resolution dataset and 85.98% on the low-resolution dataset.
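Since the abstract only sketches the architecture, the following is a minimal illustration of the described pipeline in PyTorch: CNN region features and LSTM question features tied together by cross-attention, followed by a low-dimensional variational-style bottleneck. All module choices, dimensions, and hyperparameters here are assumptions for illustration, not the authors' released implementation.

```python
# Hypothetical sketch of a cross-attention VQA model with an information
# bottleneck, as described in the abstract. Backbones, dimensions, and the
# variational bottleneck formulation are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class CrossAttentionVQA(nn.Module):
    def __init__(self, dim=512, bottleneck_dim=128,
                 vocab_size=10000, num_answers=100):
        super().__init__()
        # CNN backbone stub: any CNN yielding a grid of region features works;
        # a single strided conv stands in for e.g. a ResNet here.
        self.cnn = nn.Conv2d(3, dim, kernel_size=16, stride=16)
        self.embed = nn.Embedding(vocab_size, dim)
        self.lstm = nn.LSTM(dim, dim, batch_first=True)
        # Cross-attention: question words attend over image regions, and
        # image regions attend over question words.
        self.img_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        self.txt_attn = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
        # Bottleneck: map the fused feature to a low-dimensional Gaussian
        # (mu, logvar), regularized with a KL penalty (variational IB style).
        self.to_mu = nn.Linear(2 * dim, bottleneck_dim)
        self.to_logvar = nn.Linear(2 * dim, bottleneck_dim)
        self.classifier = nn.Linear(bottleneck_dim, num_answers)

    def forward(self, image, question_tokens):
        regions = self.cnn(image).flatten(2).transpose(1, 2)  # (B, R, dim)
        words, _ = self.lstm(self.embed(question_tokens))     # (B, T, dim)
        # Question-guided attention over regions; image-guided over words.
        img_ctx, _ = self.img_attn(words, regions, regions)   # (B, T, dim)
        txt_ctx, _ = self.txt_attn(regions, words, words)     # (B, R, dim)
        fused = torch.cat([img_ctx.mean(1), txt_ctx.mean(1)], dim=-1)
        mu, logvar = self.to_mu(fused), self.to_logvar(fused)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1).mean()
        return self.classifier(z), kl


# Usage: answer logits plus a KL term weighted into the classification loss.
model = CrossAttentionVQA()
logits, kl = model(torch.randn(2, 3, 256, 256), torch.randint(0, 10000, (2, 12)))
loss = F.cross_entropy(logits, torch.randint(0, 100, (2,))) + 1e-3 * kl
```

The KL term is what makes the bottleneck informative rather than merely small: it pressures the latent code to discard input detail except what the answer classifier needs, matching the abstract's claim of a low-dimensional layer carrying only task-relevant information.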
