Using crowdsourcing for labelling emotional speech assets

Alexey Tarasov, Charlie Cullen, Sarah-Jane Delany

Research output: Contribution to conference › Paper

Abstract

The success of supervised learning approaches to the classification of emotion in speech depends heavily on the quality of the training data. Manual annotation of emotional speech assets is the primary way of gathering training data for emotional speech recognition. This position paper proposes the use of crowdsourcing for the rating of emotional speech assets. Recent developments in learning from crowdsourcing offer opportunities to determine accurate ratings for assets that have been annotated by large numbers of non-expert individuals. The challenges involved include identifying good annotators, determining consensus ratings, and learning the biases of annotators.
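
The paper does not commit to a specific consensus or bias-learning algorithm, so the sketch below is only a rough illustration of the ideas the abstract raises, not the authors' method. It alternates between estimating a consensus rating per asset and estimating each annotator's additive bias as their mean offset from the current consensus; the annotator names, clip names, and ratings are hypothetical toy data.

```python
# A minimal sketch (assumed, not from the paper) of iterative consensus
# estimation with annotator bias learning for continuous emotion ratings.

# ratings[(annotator, asset)] = rating given by that annotator
ratings = {
    ("ann1", "clip1"): 4.0, ("ann1", "clip2"): 2.0,
    ("ann2", "clip1"): 5.0, ("ann2", "clip2"): 3.0,  # ann2 rates ~1 point high
    ("ann3", "clip1"): 4.0, ("ann3", "clip2"): 2.0,
}

assets = sorted({a for (_, a) in ratings})
annotators = sorted({w for (w, _) in ratings})

# Initialise each asset's consensus as the plain mean of its ratings.
consensus = {}
for asset in assets:
    vals = [r for (w, a), r in ratings.items() if a == asset]
    consensus[asset] = sum(vals) / len(vals)

for _ in range(20):  # alternate the two estimates until they stabilise
    # Learn each annotator's bias: mean offset from the current consensus.
    bias = {}
    for w in annotators:
        resid = [r - consensus[a] for (w2, a), r in ratings.items() if w2 == w]
        bias[w] = sum(resid) / len(resid)

    # Re-estimate the consensus from bias-corrected ratings.
    for asset in assets:
        corrected = [r - bias[w] for (w, a), r in ratings.items() if a == asset]
        consensus[asset] = sum(corrected) / len(corrected)

print({a: round(c, 2) for a, c in consensus.items()})
print({w: round(b, 2) for w, b in bias.items()})
```

One caveat with this kind of scheme: an overall shift can be absorbed either into the consensus or into every annotator's bias, so practical systems typically anchor the scale, for example with a small set of expert-rated gold-standard assets.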
Original language: English
Publication status: Published - 5 Oct 2010
Externally published: Yes
Event: W3C workshop on Emotion Markup Language - Telecom ParisTech, Paris, France
Duration: 5 Oct 2010 - 6 Oct 2010

Workshop

Workshop: W3C workshop on Emotion Markup Language
Country/Territory: France
City: Paris
Period: 5/10/10 - 6/10/10
