Listing by Author "Mayer, Sven"
1 - 10 of 20
- Journal article: Adapting visualizations and interfaces to the user (it - Information Technology: Vol. 64, No. 4-5, 2022). Chiossi, Francesco; Zagermann, Johannes; Karolus, Jakob; Rodrigues, Nils; Balestrucci, Priscilla; Weiskopf, Daniel; Ehinger, Benedikt; Feuchtner, Tiare; Reiterer, Harald; Chuang, Lewis L.; Ernst, Marc; Bulling, Andreas; Mayer, Sven; Schmidt, Albrecht.
  Adaptive visualizations and interfaces pervade our everyday tasks, improving interaction in terms of user performance and experience. This approach draws on a variety of user inputs, whether physiological, behavioral, qualitative, or multimodal combinations thereof, to enhance the interaction. Given the multitude of approaches, we outline current research trends in the inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of current research in adaptive systems.
- Conference paper: Cobity: A Plug-And-Play Toolbox to Deliver Haptics in Virtual Reality (Mensch und Computer 2022 - Tagungsband, 2022). Villa, Steeven; Mayer, Sven.
  Haptics increase presence in virtual reality applications. However, providing room-scale haptics is an open challenge. Cobots (robotic systems that are safe for human use) are a promising approach, but they require in-depth engineering skills: control happens at a low level of abstraction and involves complex procedures and implementations. In contrast, 3D tools such as Unity allow a wide range of environments for which cobots could deliver haptic feedback to be prototyped quickly. To overcome this disconnect, we present Cobity, an open-source plug-and-play solution for controlling the cobot from the virtual environment, enabling fast prototyping of a wide range of haptic experiences. We present a Unity plugin that controls the cobot via the end-effector's target pose (Cartesian position and angles); the values are converted into velocities and streamed to the cobot's inverse kinematics solver using a specially designed C++ library. Our results show that Cobity enables rapid prototyping with high precision for haptics. We argue that Cobity simplifies the creation of a wide range of haptic feedback applications, enabling designers and human-computer interaction researchers without robotics experience to quickly prototype virtual reality experiences with haptic sensations. We highlight this potential with four showcases. (A pose-to-velocity sketch follows after this list.)
- Journal article: Complementary interfaces for visual computing (it - Information Technology: Vol. 64, No. 4-5, 2022). Zagermann, Johannes; Hubenschmid, Sebastian; Balestrucci, Priscilla; Feuchtner, Tiare; Mayer, Sven; Ernst, Marc O.; Schmidt, Albrecht; Reiterer, Harald.
  With increasing complexity in visual computing tasks, a single device may not be sufficient to adequately support the user's workflow. Here, we can employ multi-device ecologies such as cross-device interaction, where a workflow can be split across multiple devices, each dedicated to a specific role. But what makes these multi-device ecologies compelling? Based on insights from our research, each device or interface component must contribute a complementary characteristic to increase the quality of interaction and further support users in their current activity. We establish the term complementary interfaces for such meaningful combinations of devices and modalities and provide an initial set of challenges. In addition, we demonstrate the value of complementarity with examples from our own research.
- Conference paper: Crossing Mixed Realities: A Review for Transitional Interfaces Design (Proceedings of Mensch und Computer 2024, 2024). Mayer, Elisabeth; Chiossi, Francesco; Mayer, Sven.
  Transitioning seamlessly from the real world into the digital world along the mixed reality (MR) continuum remains challenging. This paper investigates transitional design principles across the MR spectrum, anchored by a review of "The MagicBook", a pioneering work that introduced the concept of transitional interfaces to the HCI community. Employing a forward-backward method, we reviewed 309 publications to understand the landscape of MR transitions. Our analysis outlines four distinct transition types within MR environments, offering a novel classification scheme. From this literature corpus, we identify four categories, setting a foundation for the UX evaluation of transitional interfaces.
- Conference paper: A Design Space for User Interface Elements using Finger Orientation Input (Mensch und Computer 2021 - Tagungsband, 2021). Vogelsang, Jonas; Kiss, Francisco; Mayer, Sven.
  Despite touchscreens being used by billions of people every day, today's touch-based interactions are limited in their expressiveness, as they mostly reduce the rich information of the finger to a single 2D point. Researchers have proposed using finger orientation as input to overcome these limitations, adding two extra dimensions: the finger's pitch and yaw angles. While finger orientation has been studied in depth over the last decade, we describe an updated design space. To this end, we combine expert interviews with a literature review to describe the wide range of finger orientation input opportunities. First, we present a comprehensive set of user interface elements enhanced by finger orientation input, supported by expert interviews. Second, we extract design implications that result from the additional input parameters. Finally, we introduce a design space for finger orientation input.
- Conference paper: EyePointing: A Gaze-Based Selection Technique (Mensch und Computer 2019 - Tagungsband, 2019). Schweigert, Robin; Schwind, Valentin; Mayer, Sven.
  Interacting with objects from a distance is not only challenging in the real world but also a common problem in virtual reality (VR). One issue concerns the distinction between attention for exploration and attention for selection, also known as the Midas-touch problem. Researchers have proposed numerous approaches to overcome this challenge, using additional devices, cascaded pointing with gaze input, or eye blinks to select the remote object. Since techniques such as MAGIC pointing still require additional input to confirm a selection made with eye gaze, and thus force the user to perform unnatural behavior, there is still no solution enabling truly natural, unobtrusive, and device-free selection. In this paper, we propose EyePointing, a technique that combines MAGIC pointing with the referential mid-air pointing gesture to select objects at a distance. While the eye gaze is used to reference the object, the pointing gesture is used as the trigger. Our technique counteracts the Midas-touch problem. (A sketch of the gaze-reference/gesture-trigger logic follows after this list.)
- Conference paper: Hands-free Selection in Scroll Lists for AR Devices (Proceedings of Mensch und Computer 2024, 2024). Drewes, Heiko; Fanger, Yara; Mayer, Sven.
  While desktops and smartphones have established user interface standards, such standards are still lacking for virtual and augmented reality devices, where hands-free interaction is desirable. This paper explores utilizing eye and head tracking for interaction beyond buttons, in particular for selection in scroll lists. We conducted a user study with three interaction methods based on eye and head movements (gaze-based dwell time, gaze-head offset, and gaze-based head gestures) and compared them with state-of-the-art hand-based interaction. The evaluation of quantitative and qualitative measurements provides insights into the trade-off between physical and mental demands for augmented reality interfaces. (A dwell-time sketch follows after this list.)
- Conference paper: The Human in the Infinite Loop: A Case Study on Revealing and Explaining Human-AI Interaction Loop Failures (Mensch und Computer 2022 - Tagungsband, 2022). Ou, Changkun; Buschek, Daniel; Mayer, Sven; Butz, Andreas.
  Interactive AI systems increasingly employ a human-in-the-loop strategy, which creates new challenges for the HCI community when designing such systems. We reveal and investigate some of these challenges in a case study with an industry partner, for which we developed a prototype human-in-the-loop system for preference-guided 3D model processing. Two 3D artists used it in their daily work for three months. We found that the human-AI loop often did not converge towards a satisfactory result and designed a lab study (N=20) to investigate this further. We analyze interaction data and user feedback through the lens of theories of human judgment and explain the observed human-in-the-loop failures with two key insights: 1) optimization using preferential choices lacks mechanisms to deal with inconsistent and contradictory human judgments; 2) machine outcomes, in turn, influence future user inputs via heuristic biases and loss aversion. To mitigate these problems, we propose descriptive UI design guidelines. Our case study draws attention to challenging and practically relevant imperfections in human-AI loops that need to be considered when designing human-in-the-loop systems. (A sketch of detecting contradictory preferences follows after this list.)
- Workshop paper: Increasing Large Language Models Context Awareness through Nonverbal Cues (Mensch und Computer 2024 - Workshopband, 2024). Schmidmaier, Matthias; Harrich, Cedrik; Mayer, Sven.
  Today, interaction with LLM-based agents is mainly based on text or voice. We explore how nonverbal cues and affective information can augment this interaction in order to create empathic, context-aware agents. To that end, we extend user prompts with input from different modalities at varying levels of abstraction. In detail, we investigate the potential of extending the input to LLMs beyond text or voice, similar to human-human interaction, in which humans rely not only on the words uttered by a conversation partner but also on nonverbal cues. We envision, for example, that cameras can pick up the user's facial expressions, which can then be fed into the LLM communication as an additional input channel fostering context awareness. In this work we introduce our application ideas and implementations, present preliminary findings, and discuss arising challenges. (A prompt-augmentation sketch follows after this list.)
- Conference paper: KnuckleTouch: Enabling Knuckle Gestures on Capacitive Touchscreens using Deep Learning (Mensch und Computer 2019 - Tagungsband, 2019). Schweigert, Robin; Leusmann, Jan; Hagenmayer, Simon; Weiß, Maximilian; Le, Huy Viet; Mayer, Sven; Bulling, Andreas.
  While mobile devices have become essential for social communication and have paved the way for work on the go, their interactive capabilities are still limited to simple touch input. A promising enhancement for touch interaction is knuckle input, but recognizing knuckle gestures robustly and accurately remains challenging. We present a method to differentiate between 17 finger and knuckle gestures based on a long short-term memory (LSTM) machine learning model. Furthermore, we introduce an open-source approach that is ready to deploy on commodity touch-based devices. The model was trained on a new dataset that we collected in a mobile interaction study with 18 participants. We show that our method achieves an accuracy of 86.8% in recognizing one of the 17 gestures and an accuracy of 94.6% in differentiating between finger and knuckle. In a follow-up evaluation study, we validated our model and found that the LSTM gesture recognition achieved an accuracy of 88.6%. We show that KnuckleTouch can be used to improve input expressiveness and to provide shortcuts to frequently used functions. (An LSTM classifier sketch follows after this list.)
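
The Cobity abstract above describes converting an end-effector target pose into velocity commands for the cobot's inverse kinematics solver. A minimal Python sketch of that conversion step, assuming a simple proportional controller; all names, gains, and the axis-angle handling are illustrative assumptions, not the actual Unity/C++ plugin:

```python
# Minimal sketch of Cobity's pose-to-velocity step (illustrative only;
# the real toolbox is a Unity plugin plus a C++ streaming library).
import numpy as np

GAIN_POS = 2.0  # proportional gain for position error (assumed)
GAIN_ROT = 1.0  # proportional gain for orientation error (assumed)

def pose_to_velocity(current_pose, target_pose):
    """Map the pose error to a Cartesian velocity command.

    Poses are (x, y, z, rx, ry, rz), mirroring the "Cartesian position
    and angles" from the abstract. Subtracting axis-angle components is
    a simplification; a real controller would compose rotations properly.
    """
    error = np.asarray(target_pose, float) - np.asarray(current_pose, float)
    return np.concatenate([GAIN_POS * error[:3], GAIN_ROT * error[3:]])

# Each frame, the virtual environment would stream a new target pose;
# the resulting velocity is what gets sent to the cobot.
print(pose_to_velocity((0.30, 0.0, 0.4, 0, 0, 0.0),
                       (0.35, 0.1, 0.4, 0, 0, 0.2)))
# -> [0.1 0.2 0.  0.  0.  0.2]
```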
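EyePointing separates referencing (gaze) from triggering (a mid-air pointing gesture). A small sketch of that selection logic, with an assumed data structure and an assumed gaze-tolerance threshold rather than the authors' implementation:

```python
# Illustrative EyePointing-style selection: gaze references, gesture triggers.
from dataclasses import dataclass

@dataclass
class Target:
    name: str
    gaze_angle_deg: float  # angle between the gaze ray and the target

GAZE_TOLERANCE_DEG = 2.5  # assumed accuracy window of the eye tracker

def select(targets, pointing_detected):
    """Return the gazed-at target only while a pointing gesture is held."""
    if not pointing_detected:
        return None  # looking alone never selects: no Midas touch
    near = [t for t in targets if t.gaze_angle_deg < GAZE_TOLERANCE_DEG]
    return min(near, key=lambda t: t.gaze_angle_deg, default=None)

targets = [Target("lamp", 1.2), Target("door", 8.4)]
print(select(targets, pointing_detected=True))   # Target(name='lamp', ...)
print(select(targets, pointing_detected=False))  # None
```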
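Of the three hands-free methods in the scroll-list study, gaze-based dwell time is the simplest to sketch. A minimal version, assuming an 800 ms threshold and a per-frame update API; both are illustrative, not the study's implementation:

```python
# Minimal dwell-time selector: an item is chosen after being gazed at
# continuously for DWELL_SECONDS. Threshold and API are assumptions.
import time

DWELL_SECONDS = 0.8

class DwellSelector:
    def __init__(self):
        self.current_item = None
        self.dwell_start = None

    def update(self, gazed_item):
        """Call every frame with the list item under the gaze (or None)."""
        now = time.monotonic()
        if gazed_item != self.current_item:
            self.current_item = gazed_item
            self.dwell_start = now if gazed_item is not None else None
            return None
        if self.dwell_start and now - self.dwell_start >= DWELL_SECONDS:
            self.dwell_start = None  # fire once, then wait for a new item
            return gazed_item
        return None

# In a render loop: selected = selector.update(item_under_gaze)
```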
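The first insight of the human-in-the-loop case study is that preferential-choice optimization lacks mechanisms for inconsistent judgments. A toy illustration of what such an inconsistency looks like, written as a transitivity check over pairwise choices; the data format is an assumption, not the study's system:

```python
# Toy check for contradictory pairwise preferences (A>B, B>C, but C>A),
# the kind of inconsistency the abstract says optimizers silently absorb.
import itertools

def find_cycles(preferences):
    """preferences: set of (winner, loser) pairs from human choices."""
    items = {x for pair in preferences for x in pair}
    cycles = []
    for a, b, c in itertools.permutations(items, 3):
        # report each 3-cycle once, anchored at its smallest element
        if a == min(a, b, c) and {(a, b), (b, c), (c, a)} <= preferences:
            cycles.append((a, b, c))
    return cycles

prefs = {("A", "B"), ("B", "C"), ("C", "A")}
print(find_cycles(prefs))  # [('A', 'B', 'C')]
```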
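The workshop paper extends LLM prompts with nonverbal cues. A minimal sketch of that augmentation step, assuming a camera pipeline that already yields an expression label; the tag format and function name are hypothetical:

```python
# Hypothetical prompt augmentation: inject a nonverbal cue (e.g., a facial
# expression detected by a camera pipeline) alongside the user's text.

def augment_prompt(user_text: str, facial_expression: str) -> str:
    """Prefix the verbal input with a nonverbal-context annotation."""
    return (f"[nonverbal context: the user's facial expression is "
            f"'{facial_expression}']\n{user_text}")

print(augment_prompt("Can you explain this error again?", "frustrated"))
# The augmented string would then be sent to the LLM as the user turn.
```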
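KnuckleTouch classifies 17 finger and knuckle gestures with an LSTM. A minimal PyTorch sketch of such a sequence classifier; input features, layer sizes, and sequence length are assumptions, and the paper's exact architecture may differ:

```python
# Sketch of an LSTM touch-sequence classifier in the spirit of KnuckleTouch.
import torch
import torch.nn as nn

NUM_GESTURES = 17  # finger and knuckle gestures, as in the paper

class GestureLSTM(nn.Module):
    def __init__(self, input_dim=3, hidden_dim=64):
        super().__init__()
        # per-frame features, e.g. x, y, and capacitive intensity (assumed)
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, NUM_GESTURES)

    def forward(self, x):           # x: (batch, frames, input_dim)
        _, (h_n, _) = self.lstm(x)  # final hidden state summarizes the sequence
        return self.head(h_n[-1])   # logits over the 17 gestures

model = GestureLSTM()
dummy = torch.randn(1, 50, 3)  # one 50-frame touch sequence
print(model(dummy).shape)      # torch.Size([1, 17])
```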