
Computational Assessment of Interest in Speech—Facing the Real-Life Challenge

dc.contributor.author: Wöllmer, Martin
dc.contributor.author: Weninger, Felix
dc.contributor.author: Eyben, Florian
dc.contributor.author: Schuller, Björn
dc.date.accessioned: 2018-01-08T09:15:15Z
dc.date.available: 2018-01-08T09:15:15Z
dc.date.issued: 2011
dc.description.abstract: Automatic detection of a speaker’s level of interest is of high relevance for many applications, such as automatic customer care, tutoring systems, or affective agents. However, as the latest Interspeech 2010 Paralinguistic Challenge has shown, reliable, subject-independent estimation of non-prototypical natural interest in spontaneous conversations still remains a challenge. In this article, we introduce a fully automatic combination of brute-forced acoustic features, linguistic analysis, and non-linguistic vocalizations, exploiting cross-entity information in an early feature fusion. Linguistic information is based on speech recognition by a multi-stream approach fusing context-sensitive phoneme predictions and standard acoustic features. We provide subject-independent results for interest assessment using Bidirectional Long Short-Term Memory networks on the official Challenge task and show that our proposed system leads to the best recognition accuracies that have ever been reported for this task. The corresponding TUM AVIC corpus consists of highly spontaneous speech from face-to-face commercial presentations. The techniques presented in this article are also used in the SEMAINE system, which features an emotion-sensitive embodied conversational agent.
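The abstract describes an early feature fusion of acoustic, linguistic, and non-linguistic vocalization streams, which are concatenated frame-wise into one joint vector sequence before classification by a Bidirectional Long Short-Term Memory network. A minimal sketch of the fusion step; all feature dimensions below are hypothetical and not taken from the paper:

```python
import numpy as np

def early_fusion(acoustic, linguistic, vocalization):
    """Concatenate per-frame feature streams into one joint feature sequence
    (early fusion), so a single sequence classifier sees all streams at once."""
    return np.concatenate([acoustic, linguistic, vocalization], axis=-1)

# Hypothetical example: 100 frames with 39 acoustic, 10 linguistic,
# and 3 non-linguistic vocalization features per frame.
T = 100
fused = early_fusion(np.random.randn(T, 39),
                     np.random.randn(T, 10),
                     np.random.randn(T, 3))
print(fused.shape)  # (100, 52)
```

The fused sequence would then be fed to a BLSTM classifier, which models the temporal context of the conversation in both directions.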
dc.identifier.pissn: 1610-1987
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/11220
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 25, No. 3
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Affective computing
dc.subject: Interest recognition
dc.subject: Long short-term memory
dc.subject: Recurrent neural networks
dc.title: Computational Assessment of Interest in Speech—Facing the Real-Life Challenge
dc.type: Text/Journal Article
gi.citation.endPage: 234
gi.citation.startPage: 225
