Title: What’s on Your Mind, NICO?
Authors: Kerzel, Matthias; Ambsdorf, Jakob; Becker, Dennis; Lu, Wenhao; Strahl, Erik; Spisak, Josua; Gäde, Connor; Weber, Tom; Wermter, Stefan
Date issued: 2022
Date available: 2023-01-18
Type: Text/Journal Article
ISSN: 1610-1987
DOI: 10.1007/s13218-022-00772-8 (http://dx.doi.org/10.1007/s13218-022-00772-8)
URI: https://dl.gi.de/handle/20.500.12116/40066
Keywords: Explainable AI (XAI); Human–robot interaction; Neuro-robotics; Trust in artificial intelligence

Abstract: Explainable AI has become an important field of research on neural machine learning models. However, most existing methods are designed as tools that provide expert users with additional insights into their models. In contrast, in human-robot interaction scenarios, non-expert users are frequently confronted with complex, embodied AI systems whose inner workings are unknown. Therefore, eXplainable Human-Robot Interaction (XHRI) should leverage the user’s intuitive ability to collaborate and to use efficient communication. Using NICO, the Neuro-Inspired COmpanion, as a use-case study, we propose an XHRI framework and show how different types of explanations enhance the interaction experience. These explanations range from (a) non-verbal cues for simple and intuitive feedback of inner states via (b) comprehensive verbal explanations of the robot’s intentions, knowledge and reasoning to (c) multimodal explanations using visualizations, speech and text. We revisit past HRI-related studies conducted with NICO and analyze them with the proposed framework. Furthermore, we present two novel XHRI approaches to extract suitable verbal and multimodal explanations from neural network modules in an HRI scenario.