Explore and Explain: Self-supervised Navigation and Recounting
Abstract: Embodied AI has recently been gaining attention as it aims to foster the development of autonomous and intelligent agents. In this paper, we devise a novel embodied setting in which an agent needs to explore a previously unknown environment while recounting what it sees along the path. In this context, the agent needs to navigate the environment driven by an exploration goal, select proper moments for description, and output natural language descriptions of relevant objects and scenes. Our model integrates a novel self-supervised exploration module with penalty and a fully-attentive captioning model for explanation. We also investigate different policies for selecting proper moments for explanation, driven by information coming from both the environment and the navigation. Experiments conducted on photorealistic environments from the Matterport3D dataset assess the navigation and explanation capabilities of the agent, as well as the role of their interactions.
Citation: Bigazzi, Roberto; Landi, Federico; Cornia, Marcella; Cascianelli, Silvia; Baraldi, Lorenzo; Cucchiara, Rita, "Explore and Explain: Self-supervised Navigation and Recounting," Proceedings of the 25th International Conference on Pattern Recognition, Milan, Italy, 10-15 January 2021. DOI: 10.1109/ICPR48806.2021.9412628
- Author version:
- DOI: 10.1109/ICPR48806.2021.9412628