Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions
Abstract: Current captioning approaches describe images with black-box architectures whose behavior is difficult to control or explain from the outside. As an image can be described in countless ways depending on the goal and the context at hand, a higher degree of controllability is needed to apply captioning algorithms in complex scenarios. In this paper, we introduce a novel framework for image captioning which can generate diverse descriptions by allowing both grounding and controllability. Given a control signal in the form of a sequence or set of image regions, we generate the corresponding caption through a recurrent architecture which predicts textual chunks explicitly grounded on regions, following the constraints of the given control. Experiments are conducted on Flickr30k Entities and on COCO Entities, an extended version of COCO in which we add grounding annotations collected in a semi-automatic manner. Results demonstrate that our method achieves state-of-the-art performance on controllable image captioning, in terms of both caption quality and diversity. Code and annotations are publicly available at: https://github.com/aimagelab/show-control-and-tell.
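To make the abstract's idea concrete, the sketch below shows one possible shape of a controllable, chunk-based decoder: an ordered sequence of region features acts as the control signal, an LSTM emits words one at a time, and a learned "shift" gate decides when to move on to the next region. This is an illustrative sketch only, not the authors' implementation; all class names, dimensions, and the greedy decoding loop are hypothetical.

```python
# Minimal sketch (assumed, not the paper's code): a recurrent decoder that
# consumes an ordered sequence of region features as the control signal and
# emits words chunk by chunk, shifting to the next region when a gate fires.
import torch
import torch.nn as nn


class ControllableChunkDecoder(nn.Module):
    def __init__(self, vocab_size, region_dim=512, embed_dim=300, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim + region_dim, hidden_dim)
        self.word_head = nn.Linear(hidden_dim, vocab_size)  # next-word scores
        self.shift_head = nn.Linear(hidden_dim, 1)           # chunk-shift gate

    @torch.no_grad()
    def greedy_decode(self, regions, bos_idx=1, eos_idx=2, max_len=20):
        """regions: (num_regions, region_dim), ordered by the control signal."""
        h = regions.new_zeros(1, self.lstm.hidden_size)
        c = torch.zeros_like(h)
        word = torch.tensor([bos_idx])
        r = 0  # index of the region currently being described
        out = []
        for _ in range(max_len):
            # Condition each step on the previous word and the current region.
            x = torch.cat([self.embed(word), regions[r].unsqueeze(0)], dim=-1)
            h, c = self.lstm(x, (h, c))
            word = self.word_head(h).argmax(dim=-1)
            if word.item() == eos_idx:
                break
            out.append(word.item())
            # Move to the next region when the gate fires and regions remain.
            if torch.sigmoid(self.shift_head(h)).item() > 0.5 and r + 1 < len(regions):
                r += 1
        return out


# Usage: three detected regions, in the order dictated by the control signal.
decoder = ControllableChunkDecoder(vocab_size=1000)
caption_ids = decoder.greedy_decode(torch.randn(3, 512))
print(caption_ids)
```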
Citation:
Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita, "Show, Control and Tell: A Framework for Generating Controllable and Grounded Captions," 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, June 16-20, 2019, pp. 8299-8308. DOI: 10.1109/CVPR.2019.00850
Paper download:
- Author version:
- DOI: 10.1109/CVPR.2019.00850