
On Gaze Deployment to Audio-Visual Cues of Social Interactions

Abstract: Attention supports our urge to forage on social cues. Under certain circumstances, we spend the majority of our time scrutinising people, notably their eyes and faces, and spotting persons that are talking. To account for such behaviour, this article develops a computational model for the deployment of gaze within a multimodal landscape, namely a conversational scene. Gaze dynamics are derived in a principled way by reformulating attention deployment as a stochastic foraging problem. Model simulation experiments on a publicly available dataset of eye-tracked subjects are presented. Results show that the simulated scan paths exhibit trends similar to the eye movements of human observers watching and listening to conversational clips in a free-viewing condition.
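To give a flavour of what "attention deployment as stochastic foraging" can mean in practice, the sketch below simulates a scan path as a heavy-tailed (Lévy-like) random walk biased toward high-priority locations of a 2-D map. This is only an illustrative toy, not the model proposed in the paper: the map `priority`, the frame size, the `step` function and all parameters are hypothetical stand-ins for the audio-visual cues the authors actually use.

```python
# Minimal sketch (assumption, not the authors' method): gaze shifts as a
# Levy-like stochastic walk foraging over a 2-D priority map.
import numpy as np

rng = np.random.default_rng(0)

H, W = 120, 160                      # hypothetical frame size
priority = rng.random((H, W))        # stand-in for an audio-visual priority map
priority /= priority.sum()

def step(pos, alpha=1.6, n_candidates=32):
    """Propose heavy-tailed jumps and accept the most salient candidate."""
    y, x = pos
    lengths = (rng.pareto(alpha, n_candidates) + 1.0) * 2.0   # Pareto jump lengths
    angles = rng.uniform(0.0, 2.0 * np.pi, n_candidates)      # uniform directions
    ys = np.clip(y + lengths * np.sin(angles), 0, H - 1).astype(int)
    xs = np.clip(x + lengths * np.cos(angles), 0, W - 1).astype(int)
    best = np.argmax(priority[ys, xs])                        # forage: best candidate patch
    return ys[best], xs[best]

pos = (H // 2, W // 2)
scanpath = [pos]
for _ in range(100):
    pos = step(pos)
    scanpath.append(pos)
print(scanpath[:5])
```

In the paper itself the priority landscape would be driven by the conversational scene (faces, eyes, speaker activity) rather than random values, but the foraging structure of the simulation is analogous.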


Citation:

Boccignone, G.; Cuculo, V.; D'Amelio, A.; Grossi, G.; Lanzarotti, R., "On Gaze Deployment to Audio-Visual Cues of Social Interactions," IEEE Access, vol. 8, pp. 161630-161654, 2020. DOI: 10.1109/ACCESS.2020.3021211

Paper download: not available