Feature Boosting, Suppression, and Diversification for Fine-Grained Visual Classification
Learning feature representations from discriminative local regions plays a key role in fine-grained visual classification, and employing attention mechanisms to extract part features has become a common strategy. However, these methods have two major limitations: first, they often focus on the most salient part while neglecting other inconspicuous but distinguishable parts; second, they treat different part features in isolation and ignore their relationships. To address these limitations, we propose to locate multiple distinguishable parts and model their relationships explicitly. To this end, we introduce two lightweight modules that can be easily plugged into existing convolutional neural networks. On one hand, a feature boosting and suppression module boosts the most salient part of the feature maps to obtain a part-specific representation and suppresses it to force the subsequent network to mine other potential parts. On the other hand, a feature diversification module learns semantically complementary information from the correlated part-specific representations. Our method requires no bounding box or part annotations and can be trained end-to-end. Extensive experiments show that our method achieves state-of-the-art performance on several benchmark fine-grained datasets.
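To make the boost-then-suppress idea concrete, below is a minimal PyTorch sketch of a part-localization module in this spirit. It is an illustrative assumption rather than the paper's exact formulation: the module name, the channel-averaged saliency proxy, and the thresholded suppression rule are all hypothetical choices made for the example.

```python
# Minimal sketch of a boost-and-suppress module (assumptions: PyTorch; a simple
# channel-averaged saliency map stands in for the paper's part localization).
import torch
import torch.nn as nn


class FeatureBoostingSuppression(nn.Module):
    """Boost the most salient spatial part of a feature map into a part-specific
    representation, and suppress that part in the forwarded features so later
    stages are pushed to mine other discriminative parts."""

    def __init__(self, suppress_factor: float = 0.0):
        super().__init__()
        # 0.0 fully masks the salient region; values in (0, 1) only damp it.
        self.suppress_factor = suppress_factor

    def forward(self, x: torch.Tensor):
        # x: (B, C, H, W) feature maps from a backbone stage.
        b, c, h, w = x.shape
        # Channel-aggregated saliency map as a proxy for part importance.
        saliency = x.abs().mean(dim=1, keepdim=True)                  # (B, 1, H, W)
        # Soft spatial attention over the saliency map.
        attn = torch.softmax(saliency.view(b, 1, -1), dim=-1).view(b, 1, h, w)
        # Boosting: attention-weighted pooling gives a part-specific vector.
        part_repr = (x * attn).flatten(2).sum(dim=-1)                 # (B, C)
        # Suppression: damp above-average saliency locations before passing
        # the features on, so the next stage attends to other parts.
        mask = (saliency > saliency.mean(dim=(2, 3), keepdim=True)).float()
        suppressed = x * (1.0 - mask) + x * mask * self.suppress_factor
        return part_repr, suppressed


if __name__ == "__main__":
    fbs = FeatureBoostingSuppression()
    feats = torch.randn(2, 256, 14, 14)
    part, remaining = fbs(feats)
    print(part.shape, remaining.shape)  # (2, 256) and (2, 256, 14, 14)
```

In a full pipeline, one would insert such a module after several backbone stages to collect multiple part-specific vectors, then feed those vectors to a diversification step that encourages them to carry complementary information.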