Gender-Driven Emotion Recognition Through Speech Signals For Ambient Intelligence Applications
- Source :
- IEEE Transactions on Emerging Topics in Computing. 1:244-257
- Publication Year :
- 2013
- Publisher :
- Institute of Electrical and Electronics Engineers (IEEE), 2013.
Abstract
- This paper proposes a system that recognizes a person's emotional state from recorded audio signals. The solution aims to improve the interaction between humans and computers, enabling effective human-computer intelligent interaction. The system recognizes six emotions (anger, boredom, disgust, fear, happiness, and sadness) and the neutral state, a set of emotional states widely used for emotion recognition purposes. It also distinguishes a single emotion from all the other possible ones, as demonstrated in the reported numerical results. The system is composed of two subsystems: 1) gender recognition (GR) and 2) emotion recognition (ER). The experimental analysis shows the accuracy of the proposed ER system. The results highlight that a priori knowledge of the speaker's gender improves recognition performance. They also show that adopting feature selection assures a satisfying recognition rate while reducing the number of employed features. Future developments of the proposed solution may include implementing the system on mobile devices such as smartphones.
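The abstract describes a two-stage architecture: a gender recognition (GR) front end whose output conditions a gender-specific emotion recognition (ER) stage, with feature selection reducing the employed feature set. Below is a minimal sketch of such a pipeline; the SVC classifiers, the SelectKBest/ANOVA feature selection, and the k_features default are illustrative assumptions rather than the paper's documented method, and acoustic feature extraction is assumed to have already produced the matrix X.

```python
# Sketch of a gender-driven emotion recognition pipeline, assuming
# precomputed acoustic feature vectors X. Classifier and feature-selection
# choices (SVC, SelectKBest) are assumptions, not the paper's exact method.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC

class GenderDrivenER:
    """Two-stage pipeline: a GR model routes each utterance to a
    gender-specific ER model trained on a reduced feature set."""

    def __init__(self, k_features=20):
        self.gr = SVC()       # GR stage; classifier choice is an assumption
        self.selectors = {}   # per-gender feature selection
        self.er = {}          # per-gender ER models
        self.k = k_features

    def fit(self, X, genders, emotions):
        X = np.asarray(X)
        genders = np.asarray(genders)
        emotions = np.asarray(emotions)
        self.gr.fit(X, genders)
        for g in np.unique(genders):
            mask = genders == g
            # keep the k most emotion-discriminative features for this gender
            sel = SelectKBest(f_classif, k=min(self.k, X.shape[1]))
            Xg = sel.fit_transform(X[mask], emotions[mask])
            self.selectors[g] = sel
            self.er[g] = SVC().fit(Xg, emotions[mask])
        return self

    def predict(self, X):
        X = np.asarray(X)
        genders = self.gr.predict(X)  # a priori gender estimate per utterance
        preds = np.empty(len(X), dtype=object)
        for g in np.unique(genders):
            mask = genders == g
            preds[mask] = self.er[g].predict(
                self.selectors[g].transform(X[mask]))
        return preds
```

Training such a model would use labels over the seven classes named in the abstract (anger, boredom, disgust, fear, happiness, sadness, neutral); routing predictions through gender-specific ER models is what lets a priori gender knowledge improve accuracy, as the paper reports.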
- Subjects :
- Ambient intelligence
Audio signal
Computer science
Speech recognition
Boredom
Speaker recognition
Computer Science Applications
Human-Computer Interaction
Sadness
Computer Science (miscellaneous)
Audio signal processing
Mobile device
Information Systems
Details
- ISSN :
- 2168-6750
- Volume :
- 1
- Database :
- OpenAIRE
- Journal :
- IEEE Transactions on Emerging Topics in Computing
- Accession number :
- edsair.doi.dedup.....627250519ee1f396059be593c919d9e3
- Full Text :
- https://doi.org/10.1109/tetc.2013.2274797