
AC-VRNN: Attentive Conditional-VRNN for multi-future trajectory prediction

Abstract: Anticipating human motion in crowded scenarios is essential for developing intelligent transportation systems, social-aware robots and advanced video surveillance applications. A key aspect of this task is the inherently multi-modal nature of human paths, which yields multiple socially acceptable futures when human interactions are involved. To this end, we propose a generative architecture for multi-future trajectory prediction based on Conditional Variational Recurrent Neural Networks (C-VRNNs). Conditioning mainly relies on prior belief maps, which represent the most likely moving directions and force the model to consider past observed dynamics when generating future positions. Human interactions are modelled with a graph-based attention mechanism enabling an online attentive refinement of the recurrent hidden states. To corroborate our model, we perform extensive experiments on publicly available datasets (e.g., ETH/UCY, Stanford Drone Dataset, STATS SportVU NBA, Intersection Drone Dataset and TrajNet++) and demonstrate its effectiveness in crowded scenes compared to several state-of-the-art methods.
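To give a concrete feel for the architecture described in the abstract, the following is a minimal illustrative sketch (not the authors' released code) of one AC-VRNN-style step: a conditional VRNN cell whose prior is conditioned on a local belief map, followed by a graph-attention refinement of the hidden states of all agents. Layer sizes, the belief-map encoding and the exact attention form are assumptions made purely for illustration.

```python
# Hypothetical sketch of a belief-map-conditioned VRNN cell plus a
# graph-attention hidden-state refinement, in the spirit of AC-VRNN.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVRNNCell(nn.Module):
    def __init__(self, x_dim=2, h_dim=64, z_dim=32, bm_dim=8):
        super().__init__()
        self.phi_x = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.phi_z = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU())
        # Prior p(z_t | h_{t-1}, belief map): conditions on past dynamics.
        self.prior = nn.Linear(h_dim + bm_dim, 2 * z_dim)
        # Approximate posterior q(z_t | x_t, h_{t-1}).
        self.enc = nn.Linear(2 * h_dim, 2 * z_dim)
        # Decoder p(x_t | z_t, h_{t-1}) over 2D displacements.
        self.dec = nn.Linear(2 * h_dim, 2 * x_dim)
        self.rnn = nn.GRUCell(2 * h_dim, h_dim)

    def forward(self, x_t, h, bm_t):
        fx = self.phi_x(x_t)
        prior_mu, prior_lv = self.prior(torch.cat([h, bm_t], -1)).chunk(2, -1)
        post_mu, post_lv = self.enc(torch.cat([fx, h], -1)).chunk(2, -1)
        z = post_mu + torch.randn_like(post_mu) * (0.5 * post_lv).exp()
        fz = self.phi_z(z)
        out_mu, out_lv = self.dec(torch.cat([fz, h], -1)).chunk(2, -1)
        h_new = self.rnn(torch.cat([fx, fz], -1), h)
        # KL divergence between posterior and belief-map-conditioned prior.
        kld = 0.5 * (prior_lv - post_lv
                     + (post_lv.exp() + (post_mu - prior_mu) ** 2)
                     / prior_lv.exp() - 1).sum(-1)
        return (out_mu, out_lv), h_new, kld

class GraphAttentionRefinement(nn.Module):
    """Attentive refinement of per-agent hidden states (illustrative)."""
    def __init__(self, h_dim=64):
        super().__init__()
        self.attn = nn.Linear(2 * h_dim, 1)
        self.proj = nn.Linear(h_dim, h_dim)

    def forward(self, H):  # H: (num_agents, h_dim) hidden states at time t
        n = H.size(0)
        pairs = torch.cat([H.unsqueeze(1).expand(n, n, -1),
                           H.unsqueeze(0).expand(n, n, -1)], -1)
        alpha = F.softmax(F.leaky_relu(self.attn(pairs)).squeeze(-1), dim=-1)
        return H + torch.tanh(self.proj(alpha @ H))  # refined hidden states
```

In this reading, the cell is unrolled per agent at every time step and the refinement module is applied across agents before the next step, so social context enters through the attended hidden states while scene priors enter through the belief map fed to the prior network.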


Citation:

Bertugli, A.; Calderara, S.; Coscia, P.; Ballan, L.; Cucchiara, R. "AC-VRNN: Attentive Conditional-VRNN for multi-future trajectory prediction", COMPUTER VISION AND IMAGE UNDERSTANDING, vol. 210, pp. 103245-103257, 2021. DOI: 10.1016/j.cviu.2021.103245


Paper download: