Listing by keyword "Affective computing"
1 - 9 of 9
- Journal article: Affective Computing Combined with Android Science (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Becker-Asano, Christian. This report summarizes a number of research projects that investigated the emotional effects of android robots. The focus is on those robots that have been developed, and are continually being improved, by Hiroshi Ishiguro at both the Advanced Telecommunications Research Institute International (ATR) in Kyoto and Osaka University in Osaka, Japan. Parts of the reported empirical research were conducted by the author himself during a two-year research stay at ATR as a post-doctoral fellow of the Japan Society for the Promotion of Science. In conclusion, Affective Computing research is taken to the next level by employing physical androids rather than purely virtual humans, and Android Science benefits from the experience of the Affective Computing community in devising means to assess and evaluate the subjective impressions that android robots give rise to in human observers.
- Journal article: Computational Assessment of Interest in Speech—Facing the Real-Life Challenge (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wöllmer, Martin; Weninger, Felix; Eyben, Florian; Schuller, Björn. Automatic detection of a speaker’s level of interest is highly relevant for many applications, such as automatic customer care, tutoring systems, or affective agents. However, as the latest Interspeech 2010 Paralinguistic Challenge has shown, reliable, subject-independent estimation of non-prototypical, natural interest in spontaneous conversations remains a challenge. In this article, we introduce a fully automatic combination of brute-forced acoustic features, linguistic analysis, and non-linguistic vocalizations, exploiting cross-entity information in an early feature fusion. Linguistic information is based on speech recognition by a multi-stream approach that fuses context-sensitive phoneme predictions with standard acoustic features. We provide subject-independent results for interest assessment using Bidirectional Long Short-Term Memory networks on the official Challenge task and show that our proposed system achieves the best recognition accuracies reported for this task so far. The corresponding TUM AVIC corpus consists of highly spontaneous speech from face-to-face commercial presentations. The techniques presented in this article are also used in the SEMAINE system, which features an emotion-sensitive embodied conversational agent.
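The early feature fusion mentioned in the abstract above can be sketched in minimal form: per-utterance feature vectors from the acoustic, linguistic, and non-linguistic streams are concatenated into one joint vector before it reaches the classifier. The function and the toy feature values below are illustrative assumptions, not taken from the authors' system.

```python
def early_fusion(acoustic, linguistic, nonlinguistic):
    """Concatenate per-utterance feature vectors from all streams
    into a single joint vector (early, feature-level fusion)."""
    return list(acoustic) + list(linguistic) + list(nonlinguistic)

# Illustrative toy features for one utterance (names are assumptions)
acoustic = [0.12, 0.85, 0.33]   # e.g. pitch, energy, MFCC statistics
linguistic = [1.0, 0.0, 2.0]    # e.g. bag-of-words counts from ASR output
nonlinguistic = [0.0, 1.0]      # e.g. laughter / breathing event flags

joint = early_fusion(acoustic, linguistic, nonlinguistic)
print(len(joint))  # 8: one fused vector is fed to the sequence classifier
```

In a real pipeline the fused vector would then be passed to a sequence model such as the BLSTM the authors use; the point here is only that fusion happens at the feature level, before classification, rather than by combining per-stream decisions afterwards.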
- Journal article: Design Blueprint for Stress-Sensitive Adaptive Enterprise Systems (Business & Information Systems Engineering: Vol. 59, No. 4, 2017). Adam, Marc T. P.; Gimpel, Henner; Maedche, Alexander; Riedl, René. Stress is a major problem in human society, impairing the well-being, health, performance, and productivity of many people worldwide. Most notably, people increasingly experience stress during human-computer interactions because of the ubiquity of, and permanent connection to, information and communication technologies. This phenomenon is referred to as technostress. Enterprise systems, designed to improve the productivity of organizations, frequently contribute to this technostress and thereby counteract their own objective. Based on theoretical foundations and input from exploratory interviews and focus group discussions, the paper presents a design blueprint for stress-sensitive adaptive enterprise systems (SSAESes). A major characteristic of SSAESes is that bio-signals (e.g., heart rate or skin conductance) are integrated as real-time stress measures, with the goal that systems automatically adapt to the users’ stress levels, thereby improving human-computer interactions. Various design interventions on the individual, technological, and organizational levels promise to directly affect stressors or to moderate the impact of stressors on important outcomes such as health or performance. However, designing and deploying SSAESes pose significant challenges with respect to technical feasibility, social and ethical acceptability, and adoption and use. Considering these challenges, the paper proposes a four-stage, step-by-step implementation approach. With this Research Note on technostress in organizations, the authors seek to stimulate discussion of a timely and important phenomenon, particularly from a design science research perspective.
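The core loop of such a system, bio-signals in, adaptation out, can be illustrated with a deliberately simple rule. Everything below (thresholds, signal ranges, the adaptation actions) is a hypothetical sketch for illustration, not the blueprint from the paper.

```python
# Hypothetical stress-sensitive adaptation rule; thresholds, signal
# ranges, and actions are illustrative assumptions, not from the paper.
def stress_level(heart_rate_bpm, skin_conductance_us):
    """Map two bio-signals to a coarse stress score in [0, 1]."""
    hr = min(max((heart_rate_bpm - 60) / 60, 0.0), 1.0)     # 60-120 bpm -> 0-1
    sc = min(max((skin_conductance_us - 2) / 10, 0.0), 1.0)  # 2-12 uS -> 0-1
    return 0.5 * hr + 0.5 * sc

def adapt_ui(score):
    """Pick a system adaptation for the current stress score."""
    if score > 0.7:
        return "defer non-critical notifications"
    if score > 0.4:
        return "simplify current form to essential fields"
    return "no adaptation"

print(adapt_ui(stress_level(110, 11)))  # high stress -> defer notifications
```

A production SSAES would of course need calibrated, per-user baselines and smoothing over time rather than fixed thresholds; the sketch only shows the sense-score-adapt structure the blueprint argues for.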
- Journal article: Designing Emotions (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Kipp, Michael; Dackweiler, Thomas; Gebhard, Patrick. While current virtual characters may look photorealistic, they often lack behavioral complexity. Emotion may be the key ingredient for creating behavioral variety, social adaptivity, and thus believability. While various models of emotion have been suggested, the concrete parametrization must often be designed by the implementer. We propose to enhance an implemented affect simulator called ALMA (A Layered Model of Affect) by learning the parametrization of the underlying OCC model through user studies. Users are asked to rate emotional intensity in a variety of described situations. We then use regression analysis to recreate these reactions in the OCC model. We present a tool called EMIMOTO (EMotion Intensity MOdeling TOol) in conjunction with the ALMA simulation tool. Our approach is a first step toward empirically parametrized emotion models that aim to reflect user expectations.
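The regression step described above, fitting model parameters to user-rated intensities, can be sketched in its simplest form: ordinary least squares relating one appraisal variable to rated emotion intensity. The variable names and rating values below are invented for illustration and are not data from the EMIMOTO study.

```python
# Minimal sketch: fit emotion intensity as a linear function of one
# appraisal variable via ordinary least squares (closed form).
def fit_line(xs, ys):
    """Return (slope, intercept) of the least-squares line through (xs, ys)."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return slope, mean_y - slope * mean_x

# Hypothetical ratings: event desirability (0-1) vs. reported joy intensity
desirability = [0.1, 0.3, 0.5, 0.7, 0.9]
joy_rating = [0.2, 0.35, 0.5, 0.65, 0.8]

slope, intercept = fit_line(desirability, joy_rating)
print(round(slope, 2))  # learned coefficient of an OCC-style intensity mapping
```

The actual study fits many such parameters across situations and emotion categories; the point of the sketch is only that user ratings, not hand-tuning, determine the mapping.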
- Journal article: I Feel I Feel You: A Theory of Mind Experiment in Games (KI - Künstliche Intelligenz: Vol. 34, No. 1, 2020). Melhart, David; Yannakakis, Georgios N.; Liapis, Antonios. In this study of the player’s emotional theory of mind (ToM) of game-playing agents, we investigate how an agent’s behaviour and the player’s own performance and emotions shape the recognition of frustrated behaviour. We focus on the perception of frustration because it is a prevalent affective experience in human-computer interaction. We present a testbed game tailored towards this end, in which a player competes against an agent with a theory-based model of frustration. We collect gameplay data and an annotated ground truth about the player’s appraisal of the agent’s frustration, and apply face recognition to estimate the player’s emotional state. We examine the collected data through correlation analysis and predictive machine learning models, and find that the player’s observable emotions are not highly correlated with the perceived frustration of the agent. This suggests that our subjects’ ToM is a cognitive process grounded in the gameplay context. Our predictive models, using ranking support vector machines, corroborate these results, yielding moderately accurate predictors of players’ ToM.
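Ranking SVMs, which the study uses as predictors, learn from ordered pairs rather than absolute labels. The core trick is the pairwise transform: each "a is preferred over b" pair becomes a difference vector with label +1 (and its negation with label -1), after which any linear SVM solver can be applied. The sketch below shows only that transform; the feature values are illustrative, not the study's data.

```python
# Sketch of the pairwise transform behind ranking SVMs: each ordered pair
# (a ranked above b) becomes a difference vector labeled +1, and the
# reversed pair -1, turning ranking into binary classification.
def pairwise_transform(items):
    """items: list of (feature_vector, rank); lower rank = preferred."""
    data = []
    for xa, ra in items:
        for xb, rb in items:
            if ra < rb:  # a is preferred over b
                diff = [a - b for a, b in zip(xa, xb)]
                data.append((diff, +1))
                data.append(([-d for d in diff], -1))
    return data

# Two annotated gameplay moments, ranked by perceived agent frustration
items = [([3.0, 1.0], 1), ([1.0, 2.0], 2)]
pairs = pairwise_transform(items)
print(pairs)  # [([2.0, -1.0], 1), ([-2.0, 1.0], -1)]
```

Fitting a linear classifier on these difference vectors yields a weight vector whose dot product with any feature vector scores it for ranking, which is why the resulting predictors are naturally evaluated as ordinal (rank-based) models of the annotations.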
- Journal article: Investigating the Relationship Between Emotion Recognition Software and Usability Metrics (i-com: Vol. 19, No. 2, 2020). Schmidt, Thomas; Schlindwein, Miriam; Lichtner, Katharina; Wolff, Christian. Due to progress in affective computing, various forms of general-purpose sentiment/emotion recognition software have become available. However, such tools are rarely applied in usability engineering (UE) to measure the emotional state of participants. We investigate whether sentiment/emotion recognition software is useful for gathering objective and intuitive data that can predict usability similarly to traditional usability metrics. We present the results of a UE project examining this question for three modalities: text, speech, and face. We performed a large-scale usability test (N = 125) with a counterbalanced within-subject design and two websites of varying usability. We identified a weak but significant correlation between text-based sentiment analysis of the text acquired via thinking aloud and SUS scores, as well as a weak positive correlation between the proportion of neutrality in users’ voices and SUS scores. However, for the majority of the output of the emotion recognition software, we could not find any significant results. Emotion metrics could not be used to successfully differentiate between the two websites of varying usability. Regression models, whether unimodal or multimodal, could not predict usability metrics. We discuss reasons for these results and how to continue this research with more sophisticated methods.
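The correlations reported above are plain Pearson coefficients between per-participant scores. As a reminder of what is being computed, here is a stdlib-only sketch; the sentiment and SUS values are invented for illustration, not the study's data.

```python
import math

# Pearson correlation between per-participant sentiment scores and
# System Usability Scale (SUS) scores; sample values are invented.
def pearson(xs, ys):
    """Return the Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

sentiment = [-0.2, 0.1, 0.4, 0.0, 0.3]      # mean sentiment of think-aloud text
sus = [55.0, 60.0, 80.0, 62.5, 75.0]        # SUS scores, 0-100
print(round(pearson(sentiment, sus), 2))    # weakly-to-moderately positive here
```

With N = 125 participants, even a weak coefficient can reach significance, which is consistent with the paper's "weak but significant" framing.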
- Journal article: Positive Computing (Business & Information Systems Engineering: Vol. 57, No. 6, 2015). Pawlowski, Jan M.; Eimler, Sabrina C.; Jansen, Marc; Stoffregen, Julia; Geisler, Stefan; Koch, Oliver; Müller, Gordon; Handmann, Uwe.
- Conference paper: Smile to Me: Investigating Emotions and their Representation in Text-based Messaging in the Wild (Mensch und Computer 2019 - Tagungsband, 2019). Poguntke, Romina; Mantz, Tamara; Hassib, Mariam; Schmidt, Albrecht; Schneegaß, Stefan. Emotions are part of human communication, shaping facial expressions and representing feelings. To convey emotions, emojis have been integrated into text-based messaging applications. While visualizing emotions in text messages has been investigated in previous work, we studied the effects of emotion sharing by augmenting the WhatsApp Web user interface, a text messenger people already use on a daily basis. To this end, we designed and developed four different visualizations that represent emotions detected through facial expression recognition of chat partners using a webcam. Investigating emotion representation and its effects, we conducted a four-week longitudinal study with 28 participants, who were surveyed via 48 semi-structured interviews and 64 questionnaires. Our findings revealed that users want to maintain control over their emotions, particularly regarding sharing, and that they prefer viewing positive emotions, avoiding unpleasant social situations. Based on these insights, we formulated four design recommendations to stimulate novel approaches for augmenting chats.
- Journal article: Social Signal Interpretation (SSI) (KI - Künstliche Intelligenz: Vol. 25, No. 3, 2011). Wagner, Johannes; Lingenfelser, Florian; Bee, Nikolaus; André, Elisabeth. The development of anticipatory user interfaces is a key issue in human-centred computing. Building systems that allow humans to communicate with a machine in the same natural and intuitive way as they would with each other requires the detection and interpretation of the user’s affective and social signals. These are expressed in various and often complementary ways, including gestures, speech, facial expressions, etc. Implementing fast and robust recognition engines is not only a necessary but also a challenging task. In this article, we introduce our Social Signal Interpretation (SSI) tool, a framework dedicated to supporting the development of such online recognition systems. The paper discusses the processing of four modalities, namely audio, video, gesture, and biosignals, with a focus on affect recognition, and explains various approaches to fusing the extracted information into a final decision.
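One common way to fuse per-modality results into a final decision, as the abstract above mentions, is decision-level fusion: each modality emits class probabilities, which are combined by weighted averaging before picking the winning label. The modalities, weights, and labels below are illustrative assumptions, not SSI's actual configuration.

```python
# Illustrative decision-level fusion: each modality outputs class
# probabilities, combined by weighted averaging; the label with the
# highest fused score wins. Weights and labels are assumptions.
def fuse(decisions, weights):
    """decisions: {modality: {label: prob}}; weights: {modality: float}."""
    labels = next(iter(decisions.values())).keys()
    total = sum(weights.values())
    fused = {
        label: sum(weights[m] * decisions[m][label] for m in decisions) / total
        for label in labels
    }
    return max(fused, key=fused.get), fused

decisions = {
    "audio": {"joy": 0.6, "anger": 0.4},
    "video": {"joy": 0.7, "anger": 0.3},
    "biosignal": {"joy": 0.2, "anger": 0.8},
}
weights = {"audio": 1.0, "video": 1.0, "biosignal": 0.5}
label, scores = fuse(decisions, weights)
print(label)  # joy: the two audiovisual votes outweigh the biosignal vote
```

The alternative the abstract alludes to is feature-level (early) fusion, where raw feature vectors are merged before any classifier runs; decision-level fusion like this sketch is easier to extend when modalities drop out at runtime.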