Authors: Fleiner, Christian; Riedel, Till; Beigl, Michael; Ruoff, Marcel; Schneegass, Stefan; Pfleging, Bastian; Kern, Dagmar
Date issued: 2021
Date available: 2021-09-03
URL: https://dl.gi.de/handle/20.500.12116/37250
Abstract: Conversational user interfaces have proven beneficial in many domains, but challenges remain when they are applied in production areas, e.g., as part of a virtual assistant supporting workers in knowledge-intensive maintenance work. Regarding input modalities, touchscreens are failure-prone in wet environments, and the quality of voice recognition suffers from ambient noise. Augmenting a symmetric text- and voice-based user interface with gestural input offers a good solution for providing both efficiency and robust communication. This paper contributes to this research area with results on the application of appropriate head and one-hand gestures during maintenance work. We conducted an elicitation study with 20 participants and present a gesture set as its outcome. To facilitate gesture development and integration for application designers, a classification model for head gestures and one for one-hand gestures were developed. Additionally, a proof of concept for operators' acceptance of a multimodal conversational user interface with gestural input support during maintenance work was demonstrated. It encompasses two usability tests with 18 participants in different realistic but controlled settings: notebook repair (SUS: 82.1) and cutter head maintenance (SUS: 82.7).
Language: en
Keywords: Assistance systems; conversational agent; elicitation study; industry 4.0; multimodal interface; task guidance; user-defined gestures
Title: Ensuring a Robust Multimodal Conversational User Interface During Maintenance Work
Type: Text/Conference Paper
DOI: 10.1145/3473856.3473871