Abstract
Investigating the emotional content of speech from its acoustic characteristics requires separating the semantic content from the acoustic channel. For natural emotional speech, a widely used method of separating the two channels is cue masking. Our objective is to investigate the use of cue masking in non-acted emotional speech by analyzing the extent to which filtering affects the perception of the emotional content of the modified speech material. However, obtaining a corpus of emotional speech can be quite difficult, and verifying its emotional content is an issue we discuss thoroughly. Current speech research shows a tendency toward constructing corpora of natural emotion expression. In this paper we outline the procedure used to obtain a corpus of high-audio-quality, 'natural' emotional speech. We review the use of Mood Induction Procedures, which provide a method for obtaining spontaneous emotional speech in a controlled environment. Following this, we propose an experiment to investigate the effects of cue masking on natural emotional speech.
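Cue masking is typically realised by filtering the speech signal so that the lexical content becomes hard to recognise while prosodic cues (pitch, rhythm, intensity) largely survive. The abstract does not specify the filter design, so the sketch below is a minimal illustration only, assuming a zero-phase Butterworth low-pass filter with an illustrative 400 Hz cut-off and a hypothetical helper `low_pass_mask`:

```python
# Minimal sketch of low-pass cue masking; the cut-off and filter order are
# illustrative assumptions, not parameters taken from the paper.
import soundfile as sf
from scipy.signal import butter, filtfilt

def low_pass_mask(in_path, out_path, cutoff_hz=400.0, order=6):
    """Low-pass filter an utterance so the words become unintelligible
    while prosodic cues are largely preserved."""
    signal, sr = sf.read(in_path)
    # Zero-phase Butterworth low-pass filter at the chosen cut-off.
    b, a = butter(order, cutoff_hz, btype="low", fs=sr)
    masked = filtfilt(b, a, signal, axis=0)
    sf.write(out_path, masked, sr)

low_pass_mask("utterance.wav", "utterance_masked.wav")
```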
| Original language | English |
| --- | --- |
| Title of host publication | Face and Gesture 2011 |
| Publisher | IEEE |
| ISBN (Print) | 978-1-4244-9140-7 |
| DOIs | |
| Publication status | Published - 21 Mar 2011 |
| Externally published | Yes |
| Event | The 9th IEEE Conference on Automatic Face and Gesture Recognition - Santa Barbara, United States. Duration: 21 Mar 2011 → 23 Mar 2011 |
Conference

| Conference | The 9th IEEE Conference on Automatic Face and Gesture Recognition |
| --- | --- |
| Country/Territory | United States |
| City | Santa Barbara |
| Period | 21/03/11 → 23/03/11 |