Listing by author "Wermter, Stefan"
1 - 3 of 3
- Journal article: Developmental Robotics and its Role Towards Artificial General Intelligence (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Eppe, Manfred; Wermter, Stefan; Hafner, Verena V.; Nagai, Yukie
- Journal article: Sensorimotor Representation Learning for an "Active Self" in Robots: A Model Survey (KI - Künstliche Intelligenz: Vol. 35, No. 1, 2021) Nguyen, Phuong D. H.; Georgie, Yasmin Kim; Kayhan, Ezgi; Eppe, Manfred; Hafner, Verena Vanessa; Wermter, Stefan
  Abstract: Safe human-robot interactions require robots to be able to learn how to behave appropriately in spaces populated by people and thus to cope with the challenges posed by our dynamic and unstructured environment, rather than being provided a rigid set of rules for operation. In humans, these capabilities are thought to be related to our ability to perceive our body in space, sensing the location of our limbs during movement, being aware of other objects and agents, and controlling our body parts to interact with them intentionally. Toward the next generation of robots with bio-inspired capacities, in this paper, we first review the developmental processes of the underlying mechanisms of these abilities: the sensory representations of body schema, peripersonal space, and the active self in humans. Second, we provide a survey of robotics models of these sensory representations and robotics models of the self, and we compare these models with their human counterparts. Finally, we analyze what is missing from these robotics models and propose a theoretical computational framework, which aims to allow the emergence of the sense of self in artificial agents by developing sensory representations through self-exploration.
- Journal article: What’s on Your Mind, NICO? (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022) Kerzel, Matthias; Ambsdorf, Jakob; Becker, Dennis; Lu, Wenhao; Strahl, Erik; Spisak, Josua; Gäde, Connor; Weber, Tom; Wermter, Stefan
  Abstract: Explainable AI has become an important field of research on neural machine learning models. However, most existing methods are designed as tools that provide expert users with additional insights into their models. In contrast, in human-robot interaction scenarios, non-expert users are frequently confronted with complex, embodied AI systems whose inner workings are unknown. Therefore, eXplainable Human-Robot Interaction (XHRI) should leverage the user’s intuitive ability to collaborate and to use efficient communication. Using NICO, the Neuro-Inspired COmpanion, as a use-case study, we propose an XHRI framework and show how different types of explanations enhance the interaction experience. These explanations range from (a) non-verbal cues for simple and intuitive feedback of inner states, via (b) comprehensive verbal explanations of the robot’s intentions, knowledge, and reasoning, to (c) multimodal explanations using visualizations, speech, and text. We revisit past HRI-related studies conducted with NICO and analyze them with the proposed framework. Furthermore, we present two novel XHRI approaches to extract suitable verbal and multimodal explanations from neural network modules in an HRI scenario.