Audio Emotion Recognition

It is well known that human speech carries not only linguistic content but also the emotional state of the speaker. Therefore, in applications that require human-machine interaction, it is important that emotional states in human speech are fully perceived by computers [1]. The classification step in emotion recognition is well advanced; however, determining a set of well-discriminating features is a difficult task that requires selection among hundreds of candidate features.

We present a novel system for audio emotion recognition based on the Perceptual Evaluation of Audio Quality (PEAQ) model described in the ITU-R BS.1387-1 standard, which provides a mathematical model that approximates the human auditory system. The introduced feature set performs perceptual analysis in the time, spectral, and Bark domains, enabling the statistics of emotional audio to be represented with a small number of features for the arousal and valence dimensions. Unlike existing systems, the proposed feature set captures the statistical characteristics of emotional differences and therefore does not require data normalization to eliminate speaker or corpus dependency. Recognition results on the well-known VAM and EMO-DB corpora show that the classification accuracy achieved by the proposed feature set outperforms the reported benchmark results, particularly for valence, on both natural and acted emotional data.
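To give a concrete sense of Bark-domain analysis, the sketch below pools frame-level spectral energy into Bark bands and summarizes each band with simple statistics over time. This is only an illustrative example under stated assumptions: the function names, frame parameters, and the mean/standard-deviation pooling are hypothetical choices, not the PEAQ-based feature set of ITU-R BS.1387-1, which relies on a full psychoacoustic model.

```python
import numpy as np

def hz_to_bark(f_hz):
    """Convert frequency in Hz to the Bark scale (Zwicker & Terhardt)."""
    return 13.0 * np.arctan(0.00076 * f_hz) + 3.5 * np.arctan((f_hz / 7500.0) ** 2)

def bark_band_features(signal, sr, frame_len=1024, hop=512, n_bands=24):
    """Frame the signal, compute power spectra, pool energy into Bark bands,
    and summarize each band with its mean and standard deviation over time."""
    window = np.hanning(frame_len)
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    freqs = np.fft.rfftfreq(frame_len, d=1.0 / sr)
    # Map each FFT bin to a Bark band index (clipped to n_bands - 1).
    band_idx = np.minimum(hz_to_bark(freqs).astype(int), n_bands - 1)

    band_energy = np.zeros((n_frames, n_bands))
    for t in range(n_frames):
        frame = signal[t * hop : t * hop + frame_len] * window
        power = np.abs(np.fft.rfft(frame)) ** 2
        for b in range(n_bands):
            band_energy[t, b] = power[band_idx == b].sum()

    log_energy = np.log10(band_energy + 1e-10)
    # Per-band statistics over time form an utterance-level feature vector.
    return np.concatenate([log_energy.mean(axis=0), log_energy.std(axis=0)])
```

In such a scheme, the resulting utterance-level vector would be passed to a standard classifier or regressor to predict arousal and valence labels; the point of the sketch is only to illustrate how Bark-domain pooling compresses the spectrum into a compact, perceptually motivated representation.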