AUTHOR=Zhang Yanjun, Lin Yanru, Yang Xu TITLE=AA-RGTCN: reciprocal global temporal convolution network with adaptive alignment for video-based person re-identification JOURNAL=Frontiers in Neuroscience VOLUME=18 YEAR=2024 URL=https://www.frontiersin.org/journals/neuroscience/articles/10.3389/fnins.2024.1329884 DOI=10.3389/fnins.2024.1329884 ISSN=1662-453X ABSTRACT=

Person re-identification (Re-ID) aims to retrieve the same pedestrian across different cameras. Compared with image-based Re-ID, video-based Re-ID extracts features from video sequences, which contain both spatial and temporal information. Existing methods usually focus on the most salient image regions, which leads to redundant spatial description and insufficient temporal description. Other methods that do exploit temporal cues usually ignore misalignment between frames and operate only on a fixed-length segment of a given sequence. In this study, we propose a Reciprocal Global Temporal Convolution Network with Adaptive Alignment (AA-RGTCN). The architecture addresses frame misalignment and models a discriminative temporal representation. Specifically, the Adaptive Alignment block shifts each frame adaptively to its best position for temporal modeling. We then propose the Reciprocal Global Temporal Convolution Network to model robust temporal features across different time intervals, in both normal and inverted time order. Experimental results show that AA-RGTCN achieves 85.9% mAP and 91.0% Rank-1 on MARS, 90.6% Rank-1 on iLIDS-VID, and 96.6% Rank-1 on PRID-2011, outperforming other state-of-the-art approaches.
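
To make the reciprocal temporal modeling concrete, below is a minimal PyTorch sketch of the core idea as described in the abstract: 1D temporal convolutions with several receptive fields (approximating "different time intervals") applied to a sequence of per-frame features in both normal and inverted time order, then fused. All names, shapes, kernel sizes, and the fusion step are illustrative assumptions, not the authors' implementation; the Adaptive Alignment block is omitted here.

    import torch
    import torch.nn as nn

    class ReciprocalTemporalConv(nn.Module):
        """Sketch of reciprocal temporal convolution: the same multi-scale
        temporal convolutions are run over the frame-feature sequence in
        forward and time-reversed order, and the two views are fused."""
        def __init__(self, channels, kernel_sizes=(3, 5, 7)):
            super().__init__()
            # One branch per temporal receptive field ("time interval").
            self.branches = nn.ModuleList(
                nn.Conv1d(channels, channels, k, padding=k // 2)
                for k in kernel_sizes
            )
            # 1x1 convolution merging the forward and backward views.
            self.fuse = nn.Conv1d(2 * channels, channels, 1)

        def forward(self, x):
            # x: (batch, channels, time) -- per-frame features after
            # spatial pooling of a tracklet.
            fwd = sum(branch(x) for branch in self.branches)
            # Invert time, convolve, then flip back so positions align.
            bwd = sum(branch(x.flip(-1)) for branch in self.branches).flip(-1)
            return self.fuse(torch.cat([fwd, bwd], dim=1))

    # Usage: a batch of 4 clips, 8 frames each, 256-d per-frame descriptors.
    feats = torch.randn(4, 256, 8)
    out = ReciprocalTemporalConv(256)(feats)
    print(out.shape)  # torch.Size([4, 256, 8])

Running the branches over the reversed sequence and flipping the result back keeps every output step aligned with its input frame, so the fusion sees both a past-to-future and a future-to-past summary at each time position.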