Listing by author "Gruenefeld, Uwe"
1 - 10 of 12
- Workshop: AI and Health: Using Digital Twins to Foster Healthy Behavior (Mensch und Computer 2024 - Workshopband, 2024). Keppel, Jonas; Ivezić, Dijana; Gruenefeld, Uwe; Lukowicz, Paul; Amft, Oliver; Schneegass, Stefan. This workshop brings researchers together to discuss and explore how artificial intelligence (AI) can be used to improve general health. During our workshop at the MuC conference, we will focus on three main areas: developing ethical AI health recommendations, exploring how smart technologies in our homes can influence our health habits, and understanding how different types of feedback can change our health behaviors. The workshop aims to be a space where various research areas meet, encouraging a shared understanding and creating new ways to use AI to encourage healthy living. By focusing on real-world applications of AI and digital twins, we seek to guide our discussions toward strategies that have a direct and positive impact on individual and societal health.
- Workshop paper: Demystifying Deep Learning: A Learning Application for Beginners to Gain Practical Experience (Mensch und Computer 2020 - Workshopband, 2020). Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne. Deep learning has revolutionized machine learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application for beginners that is easy to use, yet powerful enough to solve practical deep learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. In the future, we plan to conduct a user study to evaluate our learning application online.
- Journal article: Demystifying Deep Learning: Developing and Evaluating a User-Centered Learning App for Beginners to Gain Practical Experience (i-com: Vol. 19, No. 3, 2021). Schultze, Sven; Gruenefeld, Uwe; Boll, Susanne. Deep Learning has revolutionized Machine Learning, enhancing our ability to solve complex computational problems. From image classification to speech recognition, the technology can be beneficial in a broad range of scenarios. However, the barrier to entry is quite high, especially when programming skills are missing. In this paper, we present the development of a learning application that is easy to use, yet powerful enough to solve practical Deep Learning problems. We followed the human-centered design approach and conducted a technical evaluation to identify solvable classification problems. Afterwards, we conducted an online user evaluation to gain insights on users’ experience with the app, and to understand positive as well as negative aspects of our implemented concept. Our results show that participants liked using the app and found it useful, especially for beginners. Nonetheless, future iterations of the learning app should step-wise include more features to support advancing users.
- Conference paper: Effective Visualization of Time-Critical Notifications in Virtual Reality (Mensch und Computer 2018 - Tagungsband, 2018). Gruenefeld, Uwe; Harre, Marie-Christin; Stratmann, Tim Claudius; Lüdtke, Andreas; Heuten, Wilko. Virtual Reality (VR) devices empower users to be fully immersed into a virtual environment. However, time-critical notifications must be perceived as quickly and correctly as possible, especially if they indicate risk of injury (e.g., bumping into walls). Compared to displays used in previous work to investigate fast response times, immersion into a virtual environment, a wider field of view, and the use of near-eye displays observed through lenses may have a considerable impact on the perception of time-critical notifications. Therefore, we studied the effectiveness of different visualization types (color, shape, size, text, number) in two different setups (room-scale, standing-only) with 20 participants in VR. Our study consisted of one part where we tested a single notification and another part with multiple notifications showing up at the same time. We measured reaction time, correctness, and subjective user evaluation. Our results showed that visualization types can be organized into a consistent effectiveness ranking across different numbers of notification elements presented. Further, we offer promising recommendations regarding which visualization type to use in future VR applications for showing time-critical notifications.
- Conference paper: Enabling Reusable Haptic Props for Virtual Reality by Hand Displacement (Mensch und Computer 2021 - Tagungsband, 2021). Auda, Jonas; Gruenefeld, Uwe; Schneegass, Stefan. Virtual Reality (VR) enables compelling visual experiences. However, providing haptic feedback is still challenging. Previous work suggests utilizing haptic props to overcome such limitations and presents evidence that props could function as a single haptic proxy for several virtual objects. In this work, we displace users’ hands to account for virtual objects that are smaller or larger. Hence, the same haptic prop can represent several differently-sized virtual objects. We conducted a user study (N = 12) and presented our participants with two tasks during which we repeatedly handed them the same haptic prop while they saw differently-sized virtual objects in VR. In the first task, we used a linear hand displacement and increased the size of the virtual object to understand when participants perceive a mismatch. In the second task, we compared the linear displacement to logarithmic and exponential displacements. We found that participants, on average, do not perceive the size mismatch for virtual objects up to 50% larger than the physical prop. However, we did not find any differences between the explored displacement functions. We conclude our work with future research directions.
- Conference paper: Give Weight to VR: Manipulating Users’ Perception of Weight in Virtual Reality with Electric Muscle Stimulation (Mensch und Computer 2022 - Tagungsband, 2022). Faltaous, Sarah; Prochazka, Marvin; Auda, Jonas; Keppel, Jonas; Wittig, Nick; Gruenefeld, Uwe; Schneegass, Stefan. Virtual Reality (VR) devices empower users to experience virtual worlds through rich visual and auditory sensations. However, believable haptic feedback that communicates the physical properties of virtual objects, such as their weight, is still unsolved in VR. The current trend towards hand tracking-based interactions, neglecting the typical controllers, further amplifies this problem. Hence, in this work, we investigate the combination of passive haptics and electric muscle stimulation to manipulate users’ perception of weight, and thus, simulate objects with different weights. In a laboratory user study, we investigate four differing electrode placements, stimulating different muscles, to determine which muscle results in the most potent perception of weight with the highest comfort. We found that actuating the biceps brachii or the triceps brachii muscles increased the weight perception of the users. Our findings lay the foundation for future investigations on weight perception in VR.
- Conference paper: Improving Search Time Performance for Locating Out-of-View Objects (Mensch und Computer 2019 - Tagungsband, 2019). Gruenefeld, Uwe; Prädel, Lars; Heuten, Wilko. Locating virtual objects (e.g., holograms) in head-mounted Augmented Reality (AR) can be an exhausting and frustrating task. This is mostly due to the limited field of view of current AR devices, which amplifies the problem of objects receding from view. In previous work, EyeSee360 was developed to address this problem by visualizing the locations of multiple out-of-view objects. However, on small field-of-view devices such as the HoloLens, EyeSee360 adds a lot of visual clutter that may negatively affect user performance. In this work, we compare three variants of EyeSee360 with different levels of information (assistance) to evaluate to what extent they add visual clutter and thereby negatively affect search time performance. Our results show that variants of EyeSee360 with less assistance result in faster search times.
- Workshop paper: Investigating the Challenges Facing Behavioral Biometrics in Everyday Life (Mensch und Computer 2022 - Workshopband, 2022). Saad, Alia; Gruenefeld, Uwe. The rapid growth in the usage of ubiquitous devices is matched by an equally rapid growth of user-centered attacks. Researchers have considered adopting different user identification methods, with more attention towards implicit and continuous ones, to maintain the balance between usability and privacy. In this statement, we first discuss biometric-based solutions used to assure devices’ robustness against user-centered attacks, taking inertial sensor-based gait identification as an example. We then discuss the challenges facing these solutions when integrated with everyday interactions.
- Conference paper: Mind the ARm: realtime visualization of robot motion intent in head-mounted augmented reality (Mensch und Computer 2020 - Tagungsband, 2020). Gruenefeld, Uwe; Prädel, Lars; Illing, Jannike; Stratmann, Tim; Drolshagen, Sandra; Pfingsthorn, Max. Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) which allow humans to understand robot motion intent and give way to the robot. We compare the different visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume, however, required the shortest head rotation and was perceived as most safe, enabling closer proximity of the robot arm before one left the shared workspace without causing shutdowns.
- Conference paper: Towards a Universal Human-Computer Interaction Model for Multimodal Interactions (Mensch und Computer 2021 - Tagungsband, 2021). Faltaous, Sarah; Gruenefeld, Uwe; Schneegass, Stefan. Models in HCI describe and provide insights into how humans use interactive technology. They are used by engineers, designers, and developers to understand and formalize the interaction process. At the same time, novel interaction paradigms arise constantly, introducing new ways in which interactive technology can support humans. In this work, we look into how these paradigms can be described using the classical HCI model introduced by Schomaker in 1995. We extend this model by presenting new relations that provide a better understanding of these paradigms. For this, we revisit the existing interaction paradigms and try to describe their interaction using this model. The goal of this work is to highlight the need to adapt the models to new interaction paradigms and spark discussion in the HCI community on this topic.