Listing by author "Mayer, Sven"
1 - 10 of 14
- Journal article: Adapting visualizations and interfaces to the user (it - Information Technology: Vol. 64, No. 4-5, 2022)
  Chiossi, Francesco; Zagermann, Johannes; Karolus, Jakob; Rodrigues, Nils; Balestrucci, Priscilla; Weiskopf, Daniel; Ehinger, Benedikt; Feuchtner, Tiare; Reiterer, Harald; Chuang, Lewis L.; Ernst, Marc; Bulling, Andreas; Mayer, Sven; Schmidt, Albrecht
  Adaptive visualizations and interfaces pervade our everyday tasks, improving interaction in terms of user performance and experience. This approach can draw on several kinds of user input, whether physiological, behavioral, qualitative, or multimodal combinations thereof, to enhance the interaction. Given the multitude of approaches, we outline current research trends in the inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of current research on adaptive systems.
- Conference paper: Cobity: A Plug-And-Play Toolbox to Deliver Haptics in Virtual Reality (Mensch und Computer 2022 - Tagungsband, 2022)
  Villa, Steeven; Mayer, Sven
  Haptics increases presence in virtual reality applications. However, providing room-scale haptics is an open challenge. Cobots (robotic systems that are safe for human use) are a promising approach but require in-depth engineering skills: control happens at a low abstraction level and requires complex procedures and implementations. In contrast, 3D tools such as Unity allow quick prototyping of a wide range of environments for which cobots could deliver haptic feedback. To bridge this disconnect, we present Cobity, an open-source plug-and-play solution for controlling a cobot from the virtual environment, enabling fast prototyping of a wide range of haptic experiences. We present a Unity plugin that controls the cobot via the end-effector's target pose (Cartesian position and angles); the values are converted into velocities and streamed to the cobot's inverse kinematics solver using a specially designed C++ library. Our results show that Cobity enables rapid prototyping with high precision for haptics. We argue that Cobity simplifies the creation of a wide range of haptic feedback applications, enabling designers and researchers in human-computer interaction without robotics experience to quickly prototype virtual reality experiences with haptic sensations. We highlight this potential by presenting four different showcases.
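  The pose-to-velocity conversion the abstract describes can be sketched as a simple proportional controller that drives the end-effector toward the target pose. This is a minimal illustrative sketch, not Cobity's actual API: the function name, the flat 6-DOF pose layout (x, y, z, roll, pitch, yaw), and the gain and speed-limit values are all assumptions.

  ```python
  import numpy as np

  def pose_to_velocity(current, target, gain=1.0, max_speed=0.25):
      """Map the gap between the current and target end-effector pose
      (x, y, z, roll, pitch, yaw) to a velocity command via a
      proportional controller, clamping linear speed to a safe maximum.
      Hypothetical sketch; Cobity's real conversion is in its C++ library."""
      error = np.asarray(target, dtype=float) - np.asarray(current, dtype=float)
      velocity = gain * error
      speed = np.linalg.norm(velocity[:3])  # linear part only
      if speed > max_speed:
          velocity[:3] *= max_speed / speed  # clamp for safe cobot motion
      return velocity
  ```

  A command like this would be streamed to the cobot's inverse kinematics solver on every frame, so clamping the speed is what keeps the physical robot safe around the user.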
- Journal article: Complementary interfaces for visual computing (it - Information Technology: Vol. 64, No. 4-5, 2022)
  Zagermann, Johannes; Hubenschmid, Sebastian; Balestrucci, Priscilla; Feuchtner, Tiare; Mayer, Sven; Ernst, Marc O.; Schmidt, Albrecht; Reiterer, Harald
  With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user's workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow is split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term "complementary interfaces" for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from within our own research.
- Conference paper: A Design Space for User Interface Elements using Finger Orientation Input (Mensch und Computer 2021 - Tagungsband, 2021)
  Vogelsang, Jonas; Kiss, Francisco; Mayer, Sven
  Despite touchscreens being used by billions of people every day, today's touch-based interactions are limited in their expressiveness, as they mostly reduce the rich information of the finger to a single 2D point. Researchers have proposed using finger orientation as input to overcome these limitations, adding two extra dimensions: the finger's pitch and yaw angles. While finger orientation has been studied in depth over the last decade, we describe an updated design space. To this end, we combine expert interviews with a literature review to describe the wide range of finger orientation input opportunities. First, we present a comprehensive set of user interface elements enhanced by finger orientation input, supported by expert interviews. Second, we extract design implications that follow from the additional input parameters. Finally, we introduce a design space for finger orientation input.
- Conference paper: EyePointing: A Gaze-Based Selection Technique (Mensch und Computer 2019 - Tagungsband, 2019)
  Schweigert, Robin; Schwind, Valentin; Mayer, Sven
  Interacting with objects from a distance is not only challenging in the real world but also a common problem in virtual reality (VR). One issue concerns the distinction between attention for exploration and attention for selection, also known as the Midas-touch problem. Researchers have proposed numerous approaches to overcome this challenge, using additional devices, gaze-input cascaded pointing, and eye blinks to select the remote object. While techniques such as MAGIC pointing still require additional input to confirm a selection made with eye gaze, and thus force the user to perform unnatural behavior, there is still no solution enabling truly natural, unobtrusive, and device-free interaction for selection. In this paper, we propose EyePointing, a technique that combines MAGIC pointing with the referential mid-air pointing gesture to select objects at a distance. While the eye gaze references the object, the pointing gesture serves as the trigger. Our technique counteracts the Midas-touch problem.
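  The core selection logic described in the abstract, gaze to reference and a mid-air pointing gesture to trigger, can be sketched in a few lines. The function name and inputs are hypothetical; real gaze and gesture recognition would feed this decision.

  ```python
  def eye_pointing_select(gazed_object, pointing_detected):
      """EyePointing-style selection (illustrative sketch): the gaze
      picks the referent, the mid-air pointing gesture confirms it.
      Gaze alone never selects, which sidesteps the Midas-touch
      problem of selecting everything the user merely looks at."""
      if gazed_object is not None and pointing_detected:
          return gazed_object  # select the object currently looked at
      return None              # exploration only: no selection
  ```

  The key design choice is that neither channel alone is sufficient: looking without pointing is exploration, and pointing without a gaze target is a no-op.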
- Conference paper: The Human in the Infinite Loop: A Case Study on Revealing and Explaining Human-AI Interaction Loop Failures (Mensch und Computer 2022 - Tagungsband, 2022)
  Ou, Changkun; Buschek, Daniel; Mayer, Sven; Butz, Andreas
  Interactive AI systems increasingly employ a human-in-the-loop strategy. This creates new challenges for the HCI community when designing such systems. We reveal and investigate some of these challenges in a case study with an industry partner, for which we developed a prototype human-in-the-loop system for preference-guided 3D model processing. Two 3D artists used it in their daily work for three months. We found that the human-AI loop often did not converge towards a satisfactory result and designed a lab study (N=20) to investigate this further. We analyze interaction data and user feedback through the lens of theories of human judgment to explain the observed human-in-the-loop failures with two key insights: 1) optimization using preferential choices lacks mechanisms to deal with inconsistent and contradictory human judgments; 2) machine outcomes, in turn, influence future user inputs via heuristic biases and loss aversion. To mitigate these problems, we propose descriptive UI design guidelines. Our case study draws attention to challenging and practically relevant imperfections in human-AI loops that need to be considered when designing human-in-the-loop systems.
- Conference paper: KnuckleTouch: Enabling Knuckle Gestures on Capacitive Touchscreens using Deep Learning (Mensch und Computer 2019 - Tagungsband, 2019)
  Schweigert, Robin; Leusmann, Jan; Hagenmayer, Simon; Weiß, Maximilian; Le, Huy Viet; Mayer, Sven; Bulling, Andreas
  While mobile devices have become essential for social communication and have paved the way for work on the go, their interactive capabilities are still limited to simple touch input. A promising enhancement of touch interaction is knuckle input, but recognizing knuckle gestures robustly and accurately remains challenging. We present a method to differentiate between 17 finger and knuckle gestures based on a long short-term memory (LSTM) machine learning model. Furthermore, we introduce an open-source approach that is ready to deploy on commodity touch-based devices. The model was trained on a new dataset that we collected in a mobile interaction study with 18 participants. We show that our method achieves an accuracy of 86.8% in recognizing one of the 17 gestures and an accuracy of 94.6% in differentiating between finger and knuckle. In our evaluation study, we validated our models and found that the LSTM gesture recognition achieved an accuracy of 88.6%. We show that KnuckleTouch can be used to improve input expressiveness and to provide shortcuts to frequently used functions.
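  The recognition pipeline described here, an LSTM run over a sequence of touch frames followed by a softmax over the 17 gesture classes, can be sketched in plain NumPy. This is a sketch of the general technique only: the feature dimension, hidden size, gate layout, and random weights below are illustrative assumptions, not the trained KnuckleTouch model.

  ```python
  import numpy as np

  def lstm_step(x, h, c, W, U, b):
      """One LSTM step: input, forget, output, and candidate gates are
      computed from the current frame x and the previous hidden state h."""
      z = W @ x + U @ h + b  # stacked pre-activations, shape (4*H,)
      H = h.size
      i, f, o, g = z[:H], z[H:2*H], z[2*H:3*H], z[3*H:]
      sig = lambda a: 1.0 / (1.0 + np.exp(-a))
      c_new = sig(f) * c + sig(i) * np.tanh(g)  # update the cell state
      h_new = sig(o) * np.tanh(c_new)           # expose the hidden state
      return h_new, c_new

  def classify_sequence(frames, W, U, b, W_out):
      """Run the LSTM over a touch-frame sequence, then softmax over the
      17 gesture classes (W_out has shape (17, H))."""
      H = U.shape[1]
      h, c = np.zeros(H), np.zeros(H)
      for x in frames:
          h, c = lstm_step(x, h, c, W, U, b)
      logits = W_out @ h
      p = np.exp(logits - logits.max())  # numerically stable softmax
      return p / p.sum()
  ```

  In the deployed approach the weights would come from training on the labeled touch-sequence dataset; this skeleton only shows how a variable-length sequence of per-frame touch features collapses into a single 17-way classification.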
- Text document: Mensch und Computer 2022 - Tagungsband (Mensch und Computer 2022 - Tagungsband, 2022)
  Mühlhäuser, Max; Reuter, Christian; Pfleging, Bastian; Kosch, Thomas; Matviienko, Andrii; Gerling, Kathrin; Mayer, Sven; Heuten, Wilko; Döring, Tanja; Müller, Florian; Schmitz, Martin
- Conference paper: Supporting Software Developers Through a Gaze-Based Adaptive IDE (Mensch und Computer 2023 - Tagungsband, 2023)
  Weber, Thomas; Thiel, Rafael Vinicius Mourao; Mayer, Sven
  Highly complex systems, such as software development tools, constantly gain features and, consequently, complexity, and thus risk overwhelming or distracting the user. We argue that automation and adaptation could help users focus on their work. However, the challenge is to determine correctly and promptly when to adapt what, as the user's intent is often unclear. To assist software developers, we built a gaze-adaptive integrated development environment that uses the developers' gaze as the source for learning appropriate adaptations. Beyond our experience of using gaze for an adaptive user interface, we also report first feedback from developers regarding the desirability of such a user interface, which indicated that adaptations in development tools need to strike a careful balance between automation and user control. Nonetheless, the developers see value in a gaze-based adaptive user interface and how it could improve software development tools going forward.
- Conference paper: A Survey of Natural Design for Interaction (Mensch und Computer 2022 - Tagungsband, 2022)
  Hirsch, Linda; Li, Jingyi; Mayer, Sven; Butz, Andreas
  The term "Natural Design" has various meanings and applications within and beyond the human-computer interaction community. Yet, there is no consensus on whether it is a relevant design approach or only a descriptive term without deeper meaning. We investigated the current understanding and design potential of "Natural Design" for interaction in a systematic literature review. By analyzing and rating 113 papers, we identified 47 relevant papers that applied Natural Design in different contexts. The understanding of the approach ranges from nature-related inspiration to context-dependent naturalness based on increasing familiarity or expectations. We present a structured overview of these relevant papers, contribute a systematic Natural Design model for interaction, and add 20 implications for applying Natural Design to natural user interfaces, natural interaction, or computation. We identify "Natural Design" as a relevant design approach for creating intuitive and embedded interfaces that can profit from related concepts outside human-computer interaction.