AImageLab

Robust Re-Identification by Multiple Views Knowledge Distillation

Abstract: To achieve robustness in Re-Identification, standard methods leverage tracking information in a Video-To-Video fashion. However, these solutions face a large drop in performance for single-image queries (e.g., the Image-To-Video setting). Recent works address this severe degradation by transferring temporal information from a Video-based network to an Image-based one. In this work, we devise a training strategy that allows the transfer of superior knowledge, arising from a set of views depicting the target object. Our proposal - Views Knowledge Distillation (VKD) - pins this visual variety as a supervision signal within a teacher-student framework, where the teacher educates a student who observes fewer views. As a result, the student outperforms not only its teacher but also the current state-of-the-art in Image-To-Video by a wide margin (6.3% mAP on MARS, 8.6% on Duke-Video-ReId and 5% on VeRi-776). A thorough analysis - on Person, Vehicle and Animal Re-ID - investigates the properties of VKD from both a qualitative and quantitative perspective.
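The teacher-student transfer described above can be illustrated with a standard knowledge-distillation loss, where the student (seeing fewer views) is trained to match the teacher's temperature-softened output distribution. This is a minimal, hedged sketch of generic distillation, not the exact VKD objective from the paper; all function names and the temperature value are illustrative.

```python
import math

def softmax(logits, temperature=1.0):
    # Temperature-scaled softmax over a list of raw scores.
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=4.0):
    # KL(teacher || student) on softened distributions, scaled by T^2
    # as is conventional in knowledge distillation (Hinton et al.).
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))
    return kl * temperature ** 2

# Hypothetical example: teacher logits pooled over many views of an
# identity, student logits computed from a single image of the same one.
teacher = [2.0, 0.5, -1.0]
student = [1.5, 0.7, -0.8]
loss = distillation_loss(teacher, student)
```

The loss is zero when the student exactly matches the teacher and grows as their predicted identity distributions diverge, which is what lets the single-view student inherit the multi-view teacher's knowledge.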


Citation:

Porrello, Angelo; Bergamini, Luca; Calderara, Simone, "Robust Re-Identification by Multiple Views Knowledge Distillation", Proceedings of the 16th European Conference on Computer Vision (ECCV), vol. 12355, Glasgow, Scotland, UK, pp. 93-110, August 23-28, 2020. DOI: 10.1007/978-3-030-58607-2_6

Paper download: not available