Please use this identifier to cite or link to this item: http://inaoe.repositorioinstitucional.mx/jspui/handle/1009/1863
Multimodal indexing based on semantic cohesion for image retrieval
Hugo Jair Escalante Balderas
Manuel Montes y Gómez
Luis Enrique Sucar Succar
Open Access
Attribution-NonCommercial-NoDerivatives
Multimedia image retrieval
Image annotation
Distributional term representations
Semantic cohesion modeling
This paper introduces two novel strategies for representing multimodal images, with application to multimedia image retrieval. We consider images that are accompanied by both text and labels: while text describes the image content at a very high semantic level (e.g., making reference to places, dates or events), labels provide a mid-level description of the image (i.e., in terms of the objects that can be seen in it). Accordingly, the main assumption of this work is that by combining information from text and labels we can develop very effective retrieval methods. We study standard information fusion techniques for combining both sources of information. However, although the performance of such techniques is highly competitive, they cannot effectively capture the content of images. Therefore, we propose two novel representations for multimodal images that attempt to exploit the semantic cohesion among terms from different modalities. These representations are based on distributional term representations widely used in computational linguistics. Under the proposed representations, the content of an image is modeled by a distribution of co-occurrences over terms, or of occurrences over other images, so that the representation can be regarded as an expansion of the multimodal terms in the image. We report experimental results using the SAIAPR TC12 benchmark on two sets of topics used in ImageCLEF competitions, with both manually and automatically generated labels. Experimental results show that the proposed representations significantly outperform both standard multimodal techniques and unimodal methods. Results with manually assigned labels provide an upper bound on the retrieval performance that can be obtained, whereas results with automatically generated labels are encouraging. The novel representations are able to capture the content of multimodal images more effectively. We emphasize that although we have applied our representations to multimedia image retrieval, the same formulation can be adopted for modeling other multimodal documents (e.g., videos).
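To make the co-occurrence-based representation described in the abstract concrete, the following is a minimal Python sketch. The toy documents, the "label:" prefix for the visual modality, and the represent() helper are all illustrative assumptions, not the paper's exact formulation:

from collections import Counter, defaultdict
from itertools import combinations

# Toy multimodal collection: each image is a bag of terms mixing caption
# words with region labels (labels prefixed to keep the modalities apart).
docs = [
    ["beach", "sunset", "label:sky", "label:sea"],
    ["mountain", "hike", "label:sky", "label:rock"],
    ["beach", "volleyball", "label:sand", "label:sea"],
]

# Per-term co-occurrence profiles gathered over the whole collection.
profile = defaultdict(Counter)
for d in docs:
    for a, b in combinations(set(d), 2):
        profile[a][b] += 1
        profile[b][a] += 1

def represent(image_terms):
    """Model an image as a distribution of co-occurrences over the
    vocabulary: sum the profiles of its terms and normalize, which
    expands the image's own terms with semantically cohesive ones."""
    dist = Counter()
    for t in image_terms:
        dist.update(profile[t])
    total = sum(dist.values()) or 1
    return {w: n / total for w, n in dist.items()}

# Terms from both modalities reinforce related vocabulary entries.
print(represent(["beach", "label:sea"]))

In this toy setting, the textual term "beach" and the visual term "label:sea" jointly boost vocabulary entries they co-occur with, which is the kind of cross-modal semantic cohesion the paper's representations aim to exploit.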
Springer Science+Business Media
2012
Article
English
Students
Researchers
General public
Escalante, H. J., Montes, M., & Sucar, L. E. (2012). Multimodal indexing based on semantic cohesion for image retrieval. Information Retrieval, 15(1), 1–32.
COMPUTER SCIENCE
acceptedVersion - Accepted version
Appears in collections: Computer Science Articles

Files in this item:

File: 12 Escalante_2012_Retrieval15.pdf
Size: 840.28 kB
Format: Adobe PDF