Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model
Abstract: Data-driven saliency has recently gained a lot of attention thanks to the use of Convolutional Neural Networks for predicting gaze fixations. In this paper we go beyond standard approaches to saliency prediction, in which gaze maps are computed with a feed-forward network, and present a novel model which can predict accurate saliency maps by incorporating neural attentive mechanisms. The core of our solution is a Convolutional LSTM that focuses on the most salient regions of the input image to iteratively refine the predicted saliency map. Additionally, to tackle the center bias typical of human eye fixations, our model can learn a set of prior maps generated with Gaussian functions. We show, through an extensive evaluation, that the proposed architecture outperforms the current state of the art on public saliency prediction datasets. We further study the contribution of each key component to demonstrate their robustness in different scenarios.
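To make the center-bias idea concrete, the sketch below builds a 2D Gaussian prior map of the kind the abstract describes. This is a hedged illustration, not the authors' code: the function name and the NumPy formulation are hypothetical, and in the actual model the Gaussian parameters (means and variances) would be learnable rather than fixed.

```python
import numpy as np

def gaussian_prior_map(h, w, mu_x, mu_y, sigma_x, sigma_y):
    """Build one 2D Gaussian prior map over an h x w grid.

    Coordinates are normalized to [0, 1]. In the paper's model the
    mu/sigma values are learned; here they are fixed for illustration.
    Hypothetical helper, not the authors' implementation.
    """
    ys = np.linspace(0.0, 1.0, h)[:, None]   # column vector of row coords
    xs = np.linspace(0.0, 1.0, w)[None, :]   # row vector of column coords
    # Separable 2D Gaussian via broadcasting.
    return np.exp(-(((xs - mu_x) ** 2) / (2.0 * sigma_x ** 2)
                    + ((ys - mu_y) ** 2) / (2.0 * sigma_y ** 2)))

# A prior centered at (0.5, 0.5) approximates the center bias of fixations.
prior = gaussian_prior_map(48, 64, mu_x=0.5, mu_y=0.5,
                           sigma_x=0.25, sigma_y=0.25)
```

In the full model, several such maps (with different learned parameters) would be concatenated with the network's feature maps, letting training decide how strongly to weight the center bias.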
Citation:
Cornia, Marcella; Baraldi, Lorenzo; Serra, Giuseppe; Cucchiara, Rita, "Predicting Human Eye Fixations via an LSTM-based Saliency Attentive Model," IEEE TRANSACTIONS ON IMAGE PROCESSING, vol. 27, pp. 5142-5154, 2018. DOI: 10.1109/TIP.2018.2851672
Paper download:
- Author version:
- DOI: 10.1109/TIP.2018.2851672