Authors with the most documents
Latest publications
- Journal article: Affective Computing Combined with Android Science (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Becker-Asano, Christian. This report summarizes a number of research projects that investigated the emotional effects of android robots. The focus is in particular on those robots that have been developed, and are continually being improved, by Hiroshi Ishiguro at both the Advanced Telecommunications Research Institute International (ATR) in Kyoto and Osaka University in Osaka, Japan. Parts of the reported empirical research were conducted by the author himself during a two-year research stay at ATR as a post-doctoral fellow of the Japan Society for the Promotion of Science. In conclusion, Affective Computing research is taken to the next level by employing physical androids rather than purely virtual humans, and Android Science benefits from the experience of the Affective Computing community in devising means to assess and evaluate the subjective impressions that android robots give rise to in human observers.
- Journal article: An Evaluation of Emotion Units and Feature Types for Real-Time Speech Emotion Recognition (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Vogt, Thurid; André, Elisabeth. Emotion recognition from speech in real time is an emerging research topic, and real-time constraints concern all aspects of the recognition system. We present a comparison of units and feature types for speech emotion recognition. To our knowledge, a comprehensive comparison of many different units on several databases is still missing in the literature. We discuss units with special emphasis on real-time processing; that is, we consider not only accuracy but also speed and ease of calculation. For the feature types, we likewise use only features that can be extracted fully automatically in real time, and we examine which types best characterise which emotion classes. The insights gained validate the methodology of our online speech emotion recognition system EmoVoice.
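The abstract above mentions features that can be extracted fully automatically in real time. The following is a minimal sketch (not EmoVoice's actual pipeline) of how such frame-level acoustic features are commonly computed and then summarized with utterance-level statistics; the frame lengths and the choice of log energy and zero-crossing rate are illustrative assumptions.

```python
import numpy as np

def frame_features(signal, sr=16000, frame_len=0.025, hop=0.010):
    """Compute simple frame-level acoustic features (log energy and
    zero-crossing rate), then summarize each series with utterance-level
    statistics ("functionals"), as is common in speech emotion recognition."""
    n = int(frame_len * sr)  # samples per frame
    h = int(hop * sr)        # hop size in samples
    frames = [signal[i:i + n] for i in range(0, len(signal) - n + 1, h)]
    energy = [np.log(np.sum(f ** 2) + 1e-10) for f in frames]
    zcr = [np.mean(np.abs(np.diff(np.sign(f))) > 0) for f in frames]
    # Statistics over the frame-level series form one fixed-length vector
    # per utterance, cheap enough for real-time use.
    feats = []
    for series in (energy, zcr):
        feats += [np.mean(series), np.std(series), np.min(series), np.max(series)]
    return np.array(feats)

# Example: one second of synthetic audio stands in for a recorded utterance.
rng = np.random.default_rng(0)
x = rng.standard_normal(16000)
print(frame_features(x).shape)  # 8 features per utterance
```

The resulting fixed-length vector can be fed to any standard classifier; richer feature sets differ mainly in how many frame-level descriptors and functionals they stack.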
- Journal article: News (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011)
- Journal article: A Neuroscientific View on the Role of Emotions in Behaving Cognitive Agents (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Vitay, Julien; Hamker, Fred H. While classical theories systematically opposed emotion and cognition, suggesting that emotions perturb the normal functioning of rational thought, recent progress in neuroscience highlights, on the contrary, that emotional processes are at the core of cognitive processes: they direct attention to emotionally relevant stimuli, favor the memorization of external events, evaluate the association between an action and its consequences, bias decision making by making it possible to compare the motivational value of different goals and, more generally, guide behavior towards fulfilling the needs of the organism. This article first gives an overview of the brain areas involved in the emotional modulation of behavior and suggests a functional architecture for efficient decision making. It then reviews a series of biologically inspired computational models of emotion dealing with behavioral tasks such as classical conditioning and decision making, which highlight the computational mechanisms involved in emotional behavior. It underlines the importance of embodied cognition in artificial intelligence, as emotional processing is at the core of the cognitive computations that decide which behavior is most appropriate for the agent.
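The abstract above refers to computational models of classical conditioning. As one concrete illustration (a standard textbook model, not necessarily one of those reviewed in the article), the Rescorla-Wagner rule updates the associative strength of a stimulus by a fraction of the prediction error on each trial:

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Rescorla-Wagner model of classical conditioning: the associative
    strength v of a conditioned stimulus is updated on each trial by a
    fraction alpha of the prediction error (target - v), where lam is the
    maximum strength the unconditioned stimulus (US) supports."""
    v = 0.0
    history = []
    for reinforced in trials:  # True if the US follows the CS on this trial
        target = lam if reinforced else 0.0
        v += alpha * (target - v)  # delta-rule update on the prediction error
        history.append(v)
    return history

# Acquisition over 10 reinforced trials, then extinction over 5 unreinforced ones:
curve = rescorla_wagner([True] * 10 + [False] * 5)
print(curve[9], curve[-1])  # near asymptote after acquisition, decayed after extinction
```

The same prediction-error term reappears, in more elaborate form, in the dopamine-based reinforcement learning models that the neuroscience literature connects to emotional valuation.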
- Journal article: Interview with Rosalind Picard (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Reichardt, Dirk M.
- Journal article: Computational Assessment of Interest in Speech—Facing the Real-Life Challenge (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wöllmer, Martin; Weninger, Felix; Eyben, Florian; Schuller, Björn. Automatic detection of a speaker's level of interest is highly relevant for many applications, such as automatic customer care, tutoring systems, or affective agents. However, as the Interspeech 2010 Paralinguistic Challenge has shown, reliable, subject-independent estimation of non-prototypical, natural interest in spontaneous conversations remains a challenge. In this article, we introduce a fully automatic combination of brute-forced acoustic features, linguistic analysis, and non-linguistic vocalizations, exploiting cross-entity information in an early feature fusion. Linguistic information is based on speech recognition by a multi-stream approach that fuses context-sensitive phoneme predictions and standard acoustic features. We provide subject-independent results for interest assessment using Bidirectional Long Short-Term Memory networks on the official Challenge task and show that our proposed system achieves the best recognition accuracies reported for this task so far. The corresponding TUM AVIC corpus consists of highly spontaneous speech from face-to-face commercial presentations. The techniques presented in this article are also used in the SEMAINE system, which features an emotion-sensitive embodied conversational agent.
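The abstract above describes early feature fusion of acoustic, linguistic, and non-linguistic streams. A minimal sketch of the idea follows; the feature values and stream names are purely hypothetical, and the real system feeds the fused vectors into BLSTM networks rather than a plain classifier.

```python
import numpy as np

def early_fusion(acoustic, linguistic, nonlinguistic):
    """Early (feature-level) fusion: concatenate per-utterance feature
    vectors from all streams into one vector BEFORE classification, so a
    single classifier can exploit dependencies across streams."""
    return np.concatenate([acoustic, linguistic, nonlinguistic])

# Hypothetical per-utterance feature vectors, for illustration only:
acoustic = np.array([0.2, 1.3, -0.5])  # e.g. prosodic statistics
linguistic = np.array([0.0, 1.0])      # e.g. word-class indicators
nonling = np.array([1.0])              # e.g. laughter detected in the turn
fused = early_fusion(acoustic, linguistic, nonling)
print(fused.shape)  # one 6-dimensional vector per utterance
```

The contrast is with late fusion, where each stream gets its own classifier and only the per-stream decisions are combined; early fusion trades a larger input space for the ability to model cross-stream correlations directly.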
- Journal article: Social Signal Interpretation (SSI) (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wagner, Johannes; Lingenfelser, Florian; Bee, Nikolaus; André, Elisabeth. The development of anticipatory user interfaces is a key issue in human-centred computing. Building systems that allow humans to communicate with a machine in the same natural and intuitive way as they would with each other requires the detection and interpretation of the user's affective and social signals. These are expressed in various and often complementary ways, including gestures, speech, facial expressions, etc. Implementing fast and robust recognition engines is a necessary but also challenging task. In this article, we introduce our Social Signal Interpretation (SSI) tool, a framework dedicated to supporting the development of such online recognition systems. The paper discusses the processing of four modalities, namely audio, video, gesture, and biosignals, with a focus on affect recognition, and explains various approaches to fusing the extracted information into a final decision.
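The abstract above mentions fusing information from several modalities into a final decision. One common approach in such frameworks is decision-level (late) fusion; the sketch below is an illustrative stand-in, not SSI's API, and the posterior values and modality weights are invented for the example.

```python
import numpy as np

def late_fusion(posteriors, weights=None):
    """Decision-level fusion: each modality's classifier emits class
    posteriors; combine them by a weighted average and pick the
    highest-scoring class."""
    p = np.asarray(posteriors, dtype=float)  # shape (n_modalities, n_classes)
    if weights is None:
        weights = np.ones(len(p)) / len(p)   # equal trust in every modality
    combined = np.average(p, axis=0, weights=weights)
    return combined, int(np.argmax(combined))

# Hypothetical posteriors over three affect classes from three modalities:
audio = [0.6, 0.3, 0.1]
video = [0.2, 0.5, 0.3]
bio   = [0.3, 0.4, 0.3]
combined, label = late_fusion([audio, video, bio])
print(combined, label)
```

Weighting lets a system discount a modality that is currently unreliable (e.g. video when the face is occluded), which is one reason late fusion is popular in online multimodal recognition.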
- Journal article: Behaviour Coordination for Models of Affective Behaviour (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Rank, Stefan. Software or robotic agents that can reproduce some of the (human) phenomena labelled as emotional have a range of applications in entertainment, pedagogy, and human-computer interaction in general. Based on previous experience in modelling emotion, this article introduces scenario-based analysis as a method for comparing and designing affective agent architectures, as well as a new approach to the incremental modelling of emotional phenomena. The approach uses concurrent processes, resources, and explicitly modelled limitations on them as building blocks for affective agent architectures, working towards coordination mechanisms in a concurrent model of affective competences.
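The abstract above names concurrent processes and explicitly limited resources as building blocks for behaviour coordination. The toy sketch below (my own illustration, not the article's architecture) shows the core mechanism: several behaviour processes compete for a single limited "effector" resource, so only one can act at a time and the others must wait.

```python
import threading
import time

# One effector shared by all behaviours; the semaphore makes the limit explicit.
effector = threading.Semaphore(1)
log = []
log_lock = threading.Lock()

def behaviour(name, duration):
    """A behaviour process: acquire the effector, act, release it."""
    with effector:            # acquire the limited resource
        with log_lock:
            log.append(f"{name} start")
        time.sleep(duration)  # simulate acting with the effector
        with log_lock:
            log.append(f"{name} end")

# Hypothetical behaviour names; all three run concurrently but act one at a time.
threads = [threading.Thread(target=behaviour, args=(n, 0.01))
           for n in ("flee", "approach", "idle")]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(log)  # each behaviour's start is immediately followed by its own end
```

Richer coordination schemes (priorities, preemption, partial resource sharing) can be built on the same primitive, which is what makes resource limits a useful unit of incremental modelling.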
- Journal article: Special Issue on Emotion and Computing (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Reichardt, Dirk M.