Person re-identification based on Res2Net network
Person re-identification (re-ID) has been gaining popularity in the research community owing to its numerous applications and its growing importance in the surveillance industry. Person re-ID remains challenging due to significant intra-class variations across different cameras. In this paper, we propose a multi-task network that simultaneously computes an identification loss and a verification loss. Given a pair of input images, the network predicts the identity of each image and whether the two images belong to the same identity. To obtain richer pedestrian features, we adopt the recent Res2Net network as the feature extraction backbone. Experiments on several large-scale person re-ID benchmark datasets demonstrate the effectiveness of our approach; for example, rank-1 accuracy reaches 82.67% (+0.21), with consistent improvements observed on the DukeMTMC and Market-1501 datasets. The proposed method shows encouraging improvements over state-of-the-art methods.
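The abstract describes a Siamese multi-task design: a shared Res2Net backbone feeds an identification head (per-image identity logits) and a verification head (same/different prediction for the pair). The following is a minimal sketch under those assumptions, written in PyTorch with the Res2Net-50 backbone available in timm; the class name, the squared-difference pair descriptor, and all hyperparameters are illustrative choices, not the authors' exact implementation.

```python
# Minimal sketch (PyTorch + timm); assumes the Res2Net-50 backbone shipped with timm.
# The squared-difference verification head and all names are illustrative only.
import torch
import torch.nn as nn
import timm


class MultiTaskReID(nn.Module):
    """Siamese multi-task model: per-image identity logits + pairwise same/different logits."""

    def __init__(self, num_identities: int, feat_dim: int = 2048):
        super().__init__()
        # Shared Res2Net feature extractor; num_classes=0 makes timm return pooled features.
        self.backbone = timm.create_model("res2net50_26w_4s", pretrained=False, num_classes=0)
        self.id_head = nn.Linear(feat_dim, num_identities)     # identification branch
        self.verif_head = nn.Linear(feat_dim, 2)                # verification branch (same / different)

    def forward(self, img_a: torch.Tensor, img_b: torch.Tensor):
        feat_a = self.backbone(img_a)                           # (B, feat_dim)
        feat_b = self.backbone(img_b)
        id_logits_a = self.id_head(feat_a)                      # identity prediction for image A
        id_logits_b = self.id_head(feat_b)                      # identity prediction for image B
        verif_logits = self.verif_head((feat_a - feat_b) ** 2)  # squared difference as pair descriptor
        return id_logits_a, id_logits_b, verif_logits


if __name__ == "__main__":
    model = MultiTaskReID(num_identities=751)                   # e.g. Market-1501 has 751 training IDs
    a = torch.randn(2, 3, 256, 128)                             # a common re-ID input resolution
    b = torch.randn(2, 3, 256, 128)
    id_a, id_b, verif = model(a, b)

    labels_a = torch.tensor([0, 1])
    labels_b = torch.tensor([0, 2])
    same = (labels_a == labels_b).long()                        # 1 if same identity, else 0
    ce = nn.CrossEntropyLoss()
    # Joint objective: identification loss on each image + verification loss on the pair.
    loss = ce(id_a, labels_a) + ce(id_b, labels_b) + ce(verif, same)
    print(loss.item())
```

In this sketch the two branches are trained jointly by simply summing the cross-entropy terms; how the paper weights or combines the two losses is not specified in the abstract.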