Please use this identifier to cite or link to this item: http://inaoe.repositorioinstitucional.mx/jspui/handle/1009/1956
Acoustic feature selection and classification of emotions in speech using a 3D continuous emotion model
Humberto Pérez Espinosa
Carlos Alberto Reyes Garcia
Luis Villaseñor Pineda
Open Access
Attribution-NonCommercial-NoDerivatives
Automatic emotion recognition
Continuous emotion model
Feature selection
In this paper we report the results of experiments with a database of emotional speech in English, aimed at finding the most important acoustic features for estimating the emotion primitives that determine the emotional content of speech. We are interested in exploiting the potential benefits of continuous emotion models; accordingly, we demonstrate the feasibility of applying this approach to the annotation of emotional speech and explore ways to take advantage of this kind of annotation to improve the automatic classification of basic emotions.
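For readers unfamiliar with the pipeline described in the abstract, the sketch below illustrates the general idea in Python with scikit-learn: select the acoustic features most predictive of three emotion primitives (valence, activation, dominance), regress the primitives, and then classify basic emotions from the estimated 3D primitive vector. This is a minimal illustration under assumed placeholder data and model choices, not the authors' implementation; all feature names, dimensions, and estimators here are hypothetical.

import numpy as np
from sklearn.feature_selection import SelectKBest, f_regression
from sklearn.ensemble import RandomForestRegressor, RandomForestClassifier

rng = np.random.default_rng(0)
n_utterances, n_acoustic_features = 200, 50

# Placeholder acoustic features (e.g., pitch, energy, MFCC statistics).
X = rng.normal(size=(n_utterances, n_acoustic_features))
# Placeholder primitive annotations in [-1, 1]: valence, activation, dominance.
primitives = rng.uniform(-1, 1, size=(n_utterances, 3))
# Placeholder basic-emotion labels (e.g., 0=neutral, 1=happy, 2=angry, 3=sad).
labels = rng.integers(0, 4, size=n_utterances)

# 1) Per-primitive feature selection: keep the k features most strongly
#    (linearly) related to each primitive dimension.
selected = []
for dim in range(primitives.shape[1]):
    selector = SelectKBest(f_regression, k=10).fit(X, primitives[:, dim])
    selected.append(selector.get_support(indices=True))

# 2) Regress each primitive from its selected features.
estimated = np.column_stack([
    RandomForestRegressor(random_state=0)
    .fit(X[:, idx], primitives[:, dim])
    .predict(X[:, idx])
    for dim, idx in enumerate(selected)
])

# 3) Classify basic emotions from the estimated 3D primitive vector.
clf = RandomForestClassifier(random_state=0).fit(estimated, labels)
print("Training accuracy on placeholder data:", clf.score(estimated, labels))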
Elsevier Ltd.
2012
Article
English
Students
Researchers
General public
Pérez-Espinosa, H., Reyes-García, C. A., & Villaseñor-Pineda, L. (2012). Acoustic feature selection and classification of emotions in speech using a 3D continuous emotion model. Biomedical Signal Processing and Control, 7, 79–87.
COMPUTER SCIENCE
Accepted version
Appears in collections: Artículos de Ciencias Computacionales



File                      Size       Format
Reyes_Electronc7-1.pdf    526.38 kB  Adobe PDF