Please use this identifier to cite or link to this item: http://inaoe.repositorioinstitucional.mx/jspui/handle/1009/2718
A Dual Attention-Based Representation for the Detection of Abusive Language in Texts and Memes
Horacio Jarquín
Hugo Jair Escalante Balderas
Manuel Montes y Gómez
Open Access
Attribution-NonCommercial-NoDerivatives
Attention mechanisms
Abusive Language Detection
Natural language processing
Text classification
Meme classification
In recent years, deep neural networks have gained widespread popularity for a variety of unimodal and multimodal classification tasks. Among these, Transformer-based models have emerged as a dominant approach, owing to their adaptability to diverse tasks through fine-tuning and their outstanding performance in text classification, image analysis, and multimodal tasks involving both text and images. A key component of these architectures is the self-attention mechanism, which measures the relevance among elements within an input sequence. This mechanism is particularly effective at modeling long-range dependencies, making it a cornerstone of modern neural architectures. Beyond self-attention, the literature has introduced various other attention mechanisms, which can be broadly grouped into two main branches according to how they compute similarity between elements: self-attention measures the similarity among elements within the same sequence, whereas contextual attention computes the similarity of elements with respect to a contextual vector learned during training. Despite their utility, these mechanisms have complementary limitations: self-attention disregards the relationship of elements to a global context learned during training, whereas contextual attention neglects the internal relationships among the elements of a sequence. These limitations motivate a mechanism that combines the strengths of both approaches. To address them, this doctoral research proposes the Dual Attention (DA) mechanism, which integrates both contextual and internal relationships within a sequence to build a more comprehensive representation. The DA mechanism was evaluated on the task of abusive language (AL) detection in both textual data and memes.
This task was selected for its inherent complexity: accurately interpreting instances requires both local and global contextual understanding. Abusive language often relies on subtle contextual cues and multimodal signals, making it an ideal testbed for the proposed mechanism. The DA mechanism was rigorously evaluated across multiple datasets for abusive language detection in text and memes, achieving outstanding results in the majority of cases. To further extend its applicability, the mechanism was adapted to scenarios involving pairs of sequences, in particular the multimodal task of abusive language detection in memes.
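The two attention branches contrasted in the abstract can be sketched in a few lines. The snippet below is a minimal, illustrative NumPy sketch, not the thesis implementation: self-attention scores each element of a sequence against every other element of the same sequence, while contextual attention scores each element against a single context vector (learned during training in a real model; here simply passed in as an argument). How the DA mechanism combines the two scores is specified in the thesis itself and is not reproduced here.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X):
    """Scaled dot-product self-attention (simplified: queries = keys = values = X).
    Each of the n elements attends to all n elements of the same sequence."""
    d = X.shape[-1]
    scores = X @ X.T / np.sqrt(d)       # (n, n) pairwise similarities
    weights = softmax(scores, axis=-1)  # one distribution per element
    return weights @ X                  # (n, d) re-weighted sequence

def contextual_attention(X, c):
    """Contextual attention: similarity of each element to one context vector c
    (in a trained model, c is a learned parameter)."""
    d = X.shape[-1]
    scores = X @ c / np.sqrt(d)         # (n,) similarity to the context
    weights = softmax(scores, axis=-1)  # one distribution over the sequence
    return weights @ X                  # (d,) context-weighted summary
```

Note the complementary shapes of the outputs: self-attention yields a new representation per element, while contextual attention collapses the sequence into a single context-conditioned summary, which is exactly the asymmetry the DA mechanism is designed to bridge.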
Instituto Nacional de Astrofísica, Óptica y Electrónica
2025-04
Doctoral thesis
English
Students
Researchers
General public
Jarquín Vásquez, H. J., (2025), A Dual Attention-Based Representation for the Detection of Abusive Language in Texts and Memes, Doctoral Thesis, Instituto Nacional de Astrofísica, Óptica y Electrónica.
TELECOMMUNICATIONS TECHNOLOGY
Accepted version
Appears in collections: Doctorado en Ciencias Computacionales

Files:


File: JARQUINVHJ_DCC.pdf — Size: 9.35 MB — Format: Adobe PDF