SynthCap: Augmenting Transformers with Synthetic Data for Image Captioning
Abstract: Image captioning is a challenging task that combines Computer Vision and Natural Language Processing to generate descriptive and accurate textual descriptions for input images. Research efforts in this field mainly focus on developing novel architectural components to extend image captioning models and using large-scale image-text datasets crawled from the web to boost final performance. In this work, we explore an alternative to web-crawled data and augment the training dataset with synthetic images generated by a latent diffusion model. In particular, we propose a simple yet effective synthetic data augmentation framework that is capable of significantly improving the quality of captions generated by a standard Transformer-based model, leading to competitive results on the COCO dataset.
Citation:
Caffagni, Davide; Barraco, Manuele; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita. "SynthCap: Augmenting Transformers with Synthetic Data for Image Captioning." Proceedings of the 22nd International Conference on Image Analysis and Processing, vol. 14233, Udine, Italy, pp. 112-123, September 11-15, 2023. DOI: 10.1007/978-3-031-43148-7_10
Paper download:
- Author version:
- DOI: 10.1007/978-3-031-43148-7_10