AImageLab

Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach

Abstract: Visual-semantic embeddings have been extensively used as a powerful model for cross-modal retrieval of images and sentences. In this setting, data coming from different modalities can be projected into a common embedding space, in which distances can be used to infer the similarity between pairs of images and sentences. While this approach has shown impressive performance in fully supervised settings, its application to semi-supervised scenarios has rarely been investigated. In this paper, we propose a domain adaptation model for cross-modal retrieval, in which the knowledge learned from a supervised dataset can be transferred to a target dataset in which the pairing between images and sentences is either unknown or not useful for training due to the limited size of the set. Experiments are performed on two unsupervised target scenarios, related to the fashion and cultural heritage domains, respectively. Results show that our model is able to effectively transfer the knowledge learned on ordinary visual-semantic datasets, achieving promising results. As an additional contribution, we collect and release the dataset used for the cultural heritage domain.
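To illustrate the common-embedding-space mechanism the abstract refers to, below is a minimal, hypothetical PyTorch sketch of a visual-semantic embedding trained with a max-of-hinges triplet ranking loss (in the style of VSE++). The encoder choices (a linear visual projection and a GRU sentence encoder), the dimensions, and the margin value are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualSemanticEmbedding(nn.Module):
    """Projects image features and sentences into a shared embedding space."""

    def __init__(self, img_dim=2048, word_dim=300, embed_dim=1024):
        super().__init__()
        # Assumed encoders: a linear visual projection and a GRU sentence encoder.
        self.img_proj = nn.Linear(img_dim, embed_dim)
        self.txt_enc = nn.GRU(word_dim, embed_dim, batch_first=True)

    def forward(self, img_feats, word_embs):
        # L2-normalize both modalities so dot products are cosine similarities.
        v = F.normalize(self.img_proj(img_feats), dim=-1)
        _, h = self.txt_enc(word_embs)          # final GRU hidden state per sentence
        s = F.normalize(h.squeeze(0), dim=-1)
        return v, s

def hinge_triplet_loss(v, s, margin=0.2):
    """Max-of-hinges ranking loss over in-batch negatives (VSE++-style)."""
    scores = v @ s.t()                          # (B, B) similarity matrix
    diag = scores.diag().unsqueeze(1)           # similarities of matched pairs
    cost_s = (margin + scores - diag).clamp(min=0)      # mismatched sentences
    cost_v = (margin + scores - diag.t()).clamp(min=0)  # mismatched images
    mask = torch.eye(scores.size(0), dtype=torch.bool, device=scores.device)
    cost_s = cost_s.masked_fill(mask, 0)
    cost_v = cost_v.masked_fill(mask, 0)
    # Use only the hardest negative in the batch for each direction.
    return cost_s.max(1)[0].mean() + cost_v.max(0)[0].mean()
```

At retrieval time, ranking reduces to sorting cosine similarities in the shared space: sentences are ranked for a query image, and vice versa.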


Citation:

Carraggi, Angelo; Cornia, Marcella; Baraldi, Lorenzo; Cucchiara, Rita, "Visual-Semantic Alignment Across Domains Using a Semi-Supervised Approach," Computer Vision – ECCV 2018 Workshops, vol. 11134, Munich, Germany, 8-14 September 2018, pp. 625-640, 2019. DOI: 10.1007/978-3-030-11024-6_47

