Obtaining speech assets for judgement analysis on low-pass filtered emotional speech

John Snel, Charlie Cullen

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 Citation (Scopus)

Abstract

Investigating the emotional content of speech from its acoustic characteristics requires separating the semantic content from the acoustic channel. For natural emotional speech, a widely used method of separating the two channels is cue masking. Our objective is to investigate the use of cue masking on non-acted emotional speech by analysing the extent to which filtering affects the perception of emotional content in the modified speech material. Obtaining a corpus of emotional speech is difficult, however, and verifying its emotional content is an issue discussed at length here. Current speech research shows a tendency toward constructing corpora of natural emotion expression. In this paper we outline the procedure used to obtain a corpus of high audio quality, 'natural' emotional speech. We review Mood Induction Procedures, which provide a method of obtaining spontaneous emotional speech in a controlled environment. Following this, we propose an experiment to investigate the effects of cue masking on natural emotional speech.
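As a rough illustration of the cue-masking idea described in the abstract, the sketch below low-pass filters a speech recording so that segmental (semantic) cues are attenuated while the prosodic contour is largely preserved. The cutoff frequency, filter order, and file names are illustrative assumptions only; the abstract does not state the actual filtering parameters used in the study.

```python
# Minimal cue-masking sketch: low-pass filter a speech recording.
# The 400 Hz cutoff, filter order, and file names are assumptions for
# illustration, not values taken from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, filtfilt

def low_pass_mask(in_path, out_path, cutoff_hz=400.0, order=5):
    rate, samples = wavfile.read(in_path)        # mono 16-bit PCM assumed
    samples = samples.astype(np.float64)
    # Butterworth low-pass filter, applied forwards and backwards
    # (zero-phase) so the timing of the prosodic contour is unchanged.
    b, a = butter(order, cutoff_hz, btype="low", fs=rate)
    masked = filtfilt(b, a, samples)
    wavfile.write(out_path, rate, masked.astype(np.int16))

low_pass_mask("utterance.wav", "utterance_masked.wav")
```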
Original language: English
Title of host publication: Face and Gesture 2011
Publisher: IEEE
ISBN (Print): 978-1-4244-9140-7
DOIs
Publication status: Published - 21 Mar 2011
Externally published: Yes
Event: The 9th IEEE Conference on Automatic Face and Gesture Recognition - Santa Barbara, United States
Duration: 21 Mar 2011 - 23 Mar 2011

Conference

Conference: The 9th IEEE Conference on Automatic Face and Gesture Recognition
Country/Territory: United States
City: Santa Barbara
Period: 21/03/11 - 23/03/11
