Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences
Abstract: Interest in cultural cities is constantly growing, and so is the demand for new multimedia tools and applications that enrich the way they are experienced. In this paper we propose an egocentric vision system to enhance tourists' cultural heritage experience. Using a wearable board and a glass-mounted camera, visitors can retrieve architectural details of the historical building they are observing and receive related multimedia content. To obtain an effective retrieval procedure, we propose a visual descriptor based on the covariance of local features. Unlike common Bag of Words approaches, our feature vector does not rely on a generated visual vocabulary, removing the dependence on a specific dataset and reducing the computational cost. 3D modeling is used to achieve precise visitor localization, which allows browsing visible relevant details that the user might otherwise miss. Experimental results on a publicly available cultural heritage dataset show that the proposed feature descriptor outperforms Bag of Words techniques.
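The abstract describes a descriptor built from the covariance of local features rather than from a visual vocabulary. Below is a minimal sketch of how such a region covariance descriptor is commonly computed: local feature vectors extracted from an image are summarized by their covariance matrix, which is then mapped to a Euclidean vector via the matrix logarithm. The feature choice, regularization, and mapping here are illustrative assumptions, not the paper's exact pipeline.

```python
import numpy as np
from scipy.linalg import logm

def covariance_descriptor(features, eps=1e-6):
    """Compute a covariance-based image descriptor.

    features: (N, d) array of local feature vectors from one image
              region (e.g. dense gradients or local descriptors).
    Returns a flattened vector: the covariance of the local features,
    mapped through the matrix logarithm so standard Euclidean
    distances can be used for retrieval.
    """
    features = np.asarray(features, dtype=np.float64)
    cov = np.cov(features, rowvar=False)       # (d, d) covariance of local features
    cov += eps * np.eye(cov.shape[0])          # regularize to keep it positive definite
    log_cov = logm(cov).real                   # log-Euclidean mapping of the SPD matrix
    iu = np.triu_indices(log_cov.shape[0])     # matrix is symmetric: keep upper triangle
    return log_cov[iu]

# Example usage: 500 hypothetical local descriptors of dimension 64 from one image
rng = np.random.default_rng(0)
local_feats = rng.normal(size=(500, 64))
desc = covariance_descriptor(local_feats)
print(desc.shape)  # (2080,) = 64 * 65 / 2
```

Because the descriptor dimension depends only on the local feature dimension, no vocabulary has to be learned from a training dataset, which is the dataset-independence and cost advantage the abstract refers to.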
Citation:
Alletto, Stefano; Serra, Giuseppe; Cucchiara, Rita. "Wearable Vision for Retrieving Architectural Details in Augmented Tourist Experiences." Proceedings of the 7th International Conference on Intelligent Technologies for Interactive Entertainment (INTETAIN 2015), Torino, 10-12 June 2015, pp. 134-139. DOI: 10.4108/icst.intetain.2015.260034