Learning articulated body models for people re-identification
Abstract: People re-identification is a challenging problem in surveillance and forensics; it aims to associate multiple instances of the same person acquired from different points of view and after a temporal gap. Image-based appearance features are usually adopted but, in addition to their intrinsically low discriminability, they suffer from perspective and viewpoint issues. We propose to change the approach entirely by mapping local descriptors extracted from RGB-D sensors onto a 3D body model, creating a view-independent signature. An original bone-wise color descriptor is generated and reduced with PCA to compute the person signature. The virtual bone set used to map appearance features is learned using a recursive splitting approach. Finally, people matching for re-identification is performed using Relaxed Pairwise Metric Learning, which simultaneously provides feature reduction and weighting. Experiments on a specific dataset created with the Microsoft Kinect sensor and the OpenNI libraries demonstrate the advantages of the proposed technique with respect to state-of-the-art methods based on 2D or non-articulated 3D body models.
Citation: Baltieri, Davide; Vezzani, Roberto; Cucchiara, Rita, "Learning articulated body models for people re-identification", Proceedings of the 21st ACM International Conference on Multimedia - MM '13, Barcelona, pp. 557-560, October 21-25, 2013. DOI: 10.1145/2502081.2502147
- Author version:
- DOI: 10.1145/2502081.2502147
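The signature-building step the abstract describes (per-bone color descriptors concatenated and reduced with PCA) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the bone count, histogram size, random data, and plain Euclidean matching are all assumptions (the paper instead learns a metric with Relaxed Pairwise Metric Learning).

```python
import numpy as np

# Hypothetical sketch: build a view-independent signature by concatenating
# per-bone color histograms and reducing them with PCA. All sizes and data
# below are illustrative, not from the paper.

rng = np.random.default_rng(0)
n_people, n_bones, bins = 50, 15, 24      # illustrative dataset sizes

# One color histogram per virtual bone, concatenated into a raw descriptor.
histograms = rng.random((n_people, n_bones, bins))
histograms /= histograms.sum(axis=2, keepdims=True)   # normalize each histogram
descriptors = histograms.reshape(n_people, -1)        # shape (50, 360)

# PCA via SVD of the mean-centered descriptor matrix.
mean = descriptors.mean(axis=0)
_, _, vt = np.linalg.svd(descriptors - mean, full_matrices=False)
k = 20                                                # reduced signature length
signatures = (descriptors - mean) @ vt[:k].T          # shape (50, 20)

# Matching: nearest signature under Euclidean distance; the paper instead
# performs matching with a learned metric that also weights features.
query = signatures[0]
dists = np.linalg.norm(signatures - query, axis=1)
best_match = int(dists.argmin())
```

Projecting onto the top PCA components keeps the directions of highest descriptor variance, which is the dimensionality-reduction role PCA plays in the signature computation described above.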