Listing of Künstliche Intelligenz 25(3) - August 2011 by keyword "Affective computing"
- Journal article: Affective Computing Combined with Android Science (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Becker-Asano, Christian.
  This report summarizes a number of research projects that investigated the emotional effects of android robots. It focuses in particular on the robots that have been developed, and are continually being improved, by Hiroshi Ishiguro at both the Advanced Telecommunications Research Institute International (ATR) in Kyoto and Osaka University in Osaka, Japan. Parts of the reported empirical research were conducted by the author himself during a two-year research stay at ATR as a post-doctoral fellow of the Japan Society for the Promotion of Science. In conclusion, Affective Computing research is taken to the next level by employing physical androids rather than purely virtual humans, and Android Science benefits from the experience of the Affective Computing community in devising means to assess and evaluate the subjective impressions that android robots give rise to in human observers.
- Journal article: Computational Assessment of Interest in Speech—Facing the Real-Life Challenge (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wöllmer, Martin; Weninger, Felix; Eyben, Florian; Schuller, Björn.
  Automatic detection of a speaker's level of interest is highly relevant for many applications, such as automatic customer care, tutoring systems, or affective agents. However, as the Interspeech 2010 Paralinguistic Challenge has shown, reliable, subject-independent estimation of non-prototypical, natural interest in spontaneous conversations remains a challenge. In this article, we introduce a fully automatic combination of brute-forced acoustic features, linguistic analysis, and non-linguistic vocalizations, exploiting cross-entity information in an early feature fusion. Linguistic information is based on speech recognition by a multi-stream approach that fuses context-sensitive phoneme predictions with standard acoustic features. We provide subject-independent results for interest assessment using Bidirectional Long Short-Term Memory networks on the official Challenge task and show that our proposed system achieves the best recognition accuracies reported for this task to date. The corresponding TUM AVIC corpus consists of highly spontaneous speech from face-to-face commercial presentations. The techniques presented in this article are also used in the SEMAINE system, which features an emotion-sensitive embodied conversational agent.
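  To make the early-fusion architecture named in this abstract concrete, here is a minimal sketch, not the authors' implementation: acoustic, linguistic, and non-linguistic vocalization feature streams are concatenated per frame before modeling, and the fused sequence is classified with a bidirectional LSTM (written in PyTorch here). All feature dimensions, the number of interest classes, and the network size are assumptions.

  ```python
  # Minimal sketch (not the authors' code): early feature fusion of acoustic,
  # linguistic, and non-linguistic vocalization features, classified with a
  # bidirectional LSTM. Feature dimensions and class count are assumptions.
  import torch
  import torch.nn as nn

  class InterestBLSTM(nn.Module):
      def __init__(self, acoustic_dim=384, linguistic_dim=100,
                   vocalization_dim=4, num_classes=3, hidden=128):
          super().__init__()
          fused_dim = acoustic_dim + linguistic_dim + vocalization_dim
          self.blstm = nn.LSTM(fused_dim, hidden, batch_first=True,
                               bidirectional=True)
          self.out = nn.Linear(2 * hidden, num_classes)

      def forward(self, acoustic, linguistic, vocalization):
          # Early fusion: concatenate all feature streams per frame
          # before any modeling, rather than fusing separate decisions.
          fused = torch.cat([acoustic, linguistic, vocalization], dim=-1)
          h, _ = self.blstm(fused)       # (batch, time, 2*hidden)
          return self.out(h[:, -1])      # classify from the final frame

  # Usage with dummy data: a batch of 8 utterances, 50 frames each.
  model = InterestBLSTM()
  logits = model(torch.randn(8, 50, 384), torch.randn(8, 50, 100),
                 torch.randn(8, 50, 4))
  print(logits.shape)  # torch.Size([8, 3])
  ```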
- Journal article: Designing Emotions (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Kipp, Michael; Dackweiler, Thomas; Gebhard, Patrick.
  While current virtual characters may look photorealistic, they often lack behavioral complexity. Emotion may be the key ingredient for creating behavioral variety, social adaptivity, and thus believability. While various models of emotion have been suggested, the concrete parametrization must often be designed by the implementer. We propose to enhance an implemented affect simulator called ALMA (A Layered Model of Affect) by learning the parametrization of the underlying OCC model through user studies. Users are asked to rate emotional intensity in a variety of described situations, and we then use regression analysis to recreate these ratings in the OCC model. We present a tool called EMIMOTO (EMotion Intensity MOdeling TOol) that works in conjunction with the ALMA simulation tool. Our approach is a first step toward empirically parametrized emotion models that try to reflect user expectations.
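  The abstract names regression analysis as the mechanism for recreating user-rated intensities in the OCC model. The sketch below, which is not EMIMOTO itself, shows the general idea under illustrative assumptions: situations are described by appraisal variables, mean user ratings serve as regression targets, and the fitted weights stand in for hand-tuned intensity parameters.

  ```python
  # Minimal sketch (not EMIMOTO itself): fitting OCC intensity parameters
  # from user ratings with ordinary least-squares regression. The appraisal
  # variables (desirability, likelihood) and the data points are
  # illustrative assumptions, not taken from the paper.
  import numpy as np
  from sklearn.linear_model import LinearRegression

  # Each row: appraisal variables describing a situation; target: mean
  # user-rated intensity of one emotion (e.g. "hope") for that situation.
  appraisals = np.array([
      [0.9, 0.8],   # highly desirable, likely event
      [0.9, 0.2],   # highly desirable, unlikely event
      [0.3, 0.8],
      [0.3, 0.2],
  ])
  rated_intensity = np.array([0.85, 0.40, 0.35, 0.10])

  model = LinearRegression().fit(appraisals, rated_intensity)
  print("weights:", model.coef_, "bias:", model.intercept_)

  # The fitted weights then replace hand-tuned intensity parameters in the
  # simulator: intensity = w1*desirability + w2*likelihood + bias.
  print(model.predict([[0.7, 0.5]]))
  ```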
- Journal article: Social Signal Interpretation (SSI) (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wagner, Johannes; Lingenfelser, Florian; Bee, Nikolaus; André, Elisabeth.
  The development of anticipatory user interfaces is a key issue in human-centred computing. Building systems that allow humans to communicate with a machine as naturally and intuitively as they would with each other requires the detection and interpretation of the user's affective and social signals. These are expressed in various and often complementary ways, including gestures, speech, facial expressions, etc. Implementing fast and robust recognition engines is a necessary but challenging task. In this article, we introduce our Social Signal Interpretation (SSI) tool, a framework dedicated to supporting the development of such online recognition systems. The article discusses the processing of four modalities, namely audio, video, gesture, and biosignals, with a focus on affect recognition, and explains various approaches to fusing the extracted information into a final decision.
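  The abstract mentions several approaches to fusing per-modality information into a final decision. One common scheme, shown below purely as a hedged sketch and not as the SSI framework's actual API, is weighted decision-level fusion: each modality's recognizer outputs class posteriors, which are averaged into a single distribution. All labels, posteriors, and weights are made up for illustration.

  ```python
  # Minimal sketch (not the SSI framework's API): weighted decision-level
  # fusion of per-modality affect classifiers into a final decision.
  # Class labels, posteriors, and weights are illustrative assumptions.
  import numpy as np

  LABELS = ["neutral", "positive", "negative"]

  def fuse_decisions(modality_probs, weights=None):
      """Weighted late fusion: average class posteriors across modalities."""
      probs = np.array(modality_probs)            # (n_modalities, n_classes)
      weights = np.ones(len(probs)) if weights is None else np.array(weights)
      fused = weights @ probs / weights.sum()     # weighted mean per class
      return LABELS[int(fused.argmax())], fused

  # Hypothetical posteriors from audio, video, gesture, and biosignal engines:
  audio   = [0.2, 0.6, 0.2]
  video   = [0.1, 0.7, 0.2]
  gesture = [0.4, 0.3, 0.3]
  bio     = [0.3, 0.4, 0.3]

  label, fused = fuse_decisions([audio, video, gesture, bio],
                                weights=[1.0, 1.0, 0.5, 0.5])
  print(label, fused)  # fused posterior vector and its argmax label
  ```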