Authors: Ertl, Dominik; Raneburger, David
Editors: Heiß, Hans-Ulrich; Pepper, Peter; Schlingloff, Holger; Schneider, Jörg
Date deposited: 2018-11-27
Year of publication: 2011
ISBN: 978-88579-286-4
URI: https://dl.gi.de/handle/20.500.12116/18650
Language: en
Title: Towards semi-automatic generation of vocal speech user interfaces from interaction models
Type: Text/Conference Paper
ISSN: 1617-5468

Abstract: Manual generation of vocal speech user interfaces requires effort across several disciplines: designing the interaction between a user and the system, configuring toolkits for speech input and output, and setting up a dialogue manager that governs the user interface's behavior. The effort grows further when user interfaces must be created for several platforms. We present an approach to semi-automatically generate command-based vocal speech user interfaces from an interaction model, addressing user interfaces for dialogue-based interactive systems. The interaction is designed once, and the user interface for vocal speech input and output can then be generated for different platforms. The aim is that UI designers focus on high-level dialogue design between a human and a computer instead of low-level vocal speech UI engineering.