Human Capacities for Emotion Recognition and their Implications for Computer Vision
dc.contributor.author | Liebold, Benny | |
dc.contributor.author | Richter, René | |
dc.contributor.author | Teichmann, Michael | |
dc.contributor.author | Hamker, Fred H. | |
dc.contributor.author | Ohler, Peter | |
dc.contributor.editor | Ziegler, Jürgen | |
dc.date.accessioned | 2017-06-17T18:54:35Z | |
dc.date.available | 2017-06-17T18:54:35Z | |
dc.date.issued | 2015 | |
dc.description.abstract | Current models for automated emotion recognition are developed under the assumption that emotion expressions are distinct, fixed expression patterns corresponding to basic emotions. As a result, these approaches fail to account for the emotional processes underlying emotion expressions. We review the literature on human emotion processing and suggest an alternative approach to affective computing. We postulate that the generalizability and robustness of these models can be greatly increased by three major steps: (1) modeling emotional processes as a necessary foundation of emotion recognition; (2) basing models of emotional processes on our knowledge about the human brain; (3) conceptualizing emotions based on appraisal processes and thus regarding emotion expressions as expressive behavior linked to these appraisals rather than as fixed neuro-motor patterns. Since modeling emotional processes after neurobiological processes can be considered a long-term effort, we suggest that researchers should focus on early appraisals, which evaluate intrinsic stimulus properties with little higher cortical involvement. With this goal in mind, we focus on the amygdala and its neural connectivity pattern as a promising structure for early emotional processing. We derive a model for the amygdala-visual cortex circuit from the current state of neuroscientific research. The model can associate visual stimuli with body reactions through conditioning, enabling rapid emotional processing of stimuli consistent with the early stages of psychological appraisal theories. Additionally, amygdala activity can feed back to visual areas to modulate attention allocation according to the emotional relevance of a stimulus. Finally, we discuss the implications of the model for other approaches to automated emotion recognition. | |
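The conditioning-plus-feedback mechanism the abstract describes can be made concrete with a toy sketch. The Python below is a minimal illustration, not the paper's neural network model: the names (AmygdalaCircuitSketch, condition, attention_gain), the Rescorla-Wagner-style learning rule, and the multiplicative attention gain are all assumptions standing in for the actual amygdala-visual cortex circuit.

import numpy as np

class AmygdalaCircuitSketch:
    """Toy model of the loop described in the abstract: visual stimuli
    acquire emotional relevance through conditioning with body reactions,
    and that relevance feeds back to modulate visual attention."""

    def __init__(self, n_features, lr=0.1):
        # Associative weights from visual features to amygdala relevance.
        self.w = np.zeros(n_features)
        self.lr = lr  # conditioning (learning) rate

    def relevance(self, visual_features):
        # Amygdala activity: learned emotional relevance of the stimulus.
        return float(self.w @ visual_features)

    def condition(self, visual_features, body_reaction):
        # Rescorla-Wagner-style update (an assumption, not the paper's rule):
        # the error between the body reaction (unconditioned signal) and the
        # current relevance drives learning, mimicking early, stimulus-driven
        # appraisal with little higher cortical involvement.
        error = body_reaction - self.relevance(visual_features)
        self.w += self.lr * error * visual_features

    def attention_gain(self, visual_features):
        # Feedback to visual areas: emotionally relevant stimuli receive
        # a multiplicative attention boost.
        return 1.0 + max(0.0, self.relevance(visual_features))

# Illustrative usage: repeatedly pair a stimulus with a body reaction.
model = AmygdalaCircuitSketch(n_features=4)
stimulus = np.array([1.0, 0.0, 1.0, 0.0])
for _ in range(20):
    model.condition(stimulus, body_reaction=1.0)
print(model.attention_gain(stimulus))  # > 1.0: the stimulus now attracts attention

After conditioning, the gain for the paired stimulus exceeds 1.0, illustrating how learned emotional relevance could bias attention allocation in the visual pathway.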
dc.identifier.pissn | 2196-6826 | |
dc.language.iso | en | |
dc.publisher | De Gruyter | |
dc.relation.ispartof | i-com: Vol. 14, No. 2 | |
dc.subject | Emotions in HCI | |
dc.subject | Emotion Recognition | |
dc.subject | Neural Networks | |
dc.title | Human Capacities for Emotion Recognition and their Implications for Computer Vision | |
dc.type | Text/Journal Article | |
gi.citation.publisherPlace | Berlin | |
gi.citation.startPage | 126 | |
gi.citation.endPage | 137 | |
gi.document.quality | digidoc |