Ensuring a Robust Multimodal Conversational User Interface During Maintenance Work
dc.contributor.author | Fleiner, Christian | |
dc.contributor.author | Riedel, Till | |
dc.contributor.author | Beigl, Michael | |
dc.contributor.author | Ruoff, Marcel | |
dc.contributor.editor | Schneegass, Stefan | |
dc.contributor.editor | Pfleging, Bastian | |
dc.contributor.editor | Kern, Dagmar | |
dc.date.accessioned | 2021-09-03T19:10:17Z | |
dc.date.available | 2021-09-03T19:10:17Z | |
dc.date.issued | 2021 | |
dc.description.abstract | It has been shown that the provision of a conversational user interface proves beneficial in many domains. However, there are still many challenges when it is applied in production areas, e.g. as part of a virtual assistant to support workers in knowledge-intensive maintenance work. Regarding input modalities, touchscreens are failure-prone in wet environments, and the quality of voice recognition is negatively affected by ambient noise. Augmenting a symmetric text- and voice-based user interface with gestural input is a good solution to provide both efficiency and robust communication. This paper contributes to this research area by providing results on the application of appropriate head and one-hand gestures during maintenance work. We conducted an elicitation study with 20 participants and present a gesture set as its outcome. To facilitate gesture development and integration for application designers, a classification model for head gestures and one for one-hand gestures were developed. Additionally, a proof of concept for operators’ acceptance of a multimodal conversational user interface with support for gestural input during maintenance work was demonstrated. It encompasses two usability tests with 18 participants in different realistic but controlled settings: notebook repair (SUS: 82.1) and cutter head maintenance (SUS: 82.7). | en |
dc.description.uri | https://dl.acm.org/doi/10.1145/3473856.3473871 | en |
dc.identifier.doi | 10.1145/3473856.3473871 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/37250 | |
dc.language.iso | en | |
dc.publisher | ACM | |
dc.relation.ispartof | Mensch und Computer 2021 - Tagungsband | |
dc.relation.ispartofseries | Mensch und Computer | |
dc.subject | Assistance systems | |
dc.subject | conversational agent | |
dc.subject | elicitation study | |
dc.subject | industry 4.0 | |
dc.subject | multimodal interface | |
dc.subject | task guidance | |
dc.subject | user-defined gestures | |
dc.title | Ensuring a Robust Multimodal Conversational User Interface During Maintenance Work | en |
dc.type | Text/Conference Paper | |
gi.citation.endPage | 97 | |
gi.citation.publisherPlace | New York | |
gi.citation.startPage | 85 | |
gi.conference.date | 5.-8. September 2021 | |
gi.conference.location | Ingolstadt | |
gi.conference.sessiontitle | MCI-SE02 | |
gi.document.quality | digidoc |