MCI Dissertations
In this section we collect (references to) dissertations on HCI topics. Apart from the requirement that the dissertation has been completed successfully, there are no further criteria for inclusion in this directory (in particular, no minimum grade such as summa or magna cum laude). If a thesis falls within the field of human-computer interaction and you consider it worth including here, simply send an informal note to michael.koch@unibw.de.
Authors with the most documents
Latest publications
- Dissertation: Feedback und Anreize für die Nutzung von Web 2.0 Diensten (2013) Mazarakis, Athanasios. This thesis examined feedback mechanisms, some already established and some newly developed, for their effectiveness in increasing motivation in lecture wikis. Feedback mechanisms are defined here as system-neutral feedback that is generated automatically from past user activity and contains no subjective valuation. In addition, personality traits and environmental factors were examined.
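The notion of "system-neutral feedback" in the entry above can be illustrated with a minimal sketch: feedback derived purely from past activity counts, with no praise or criticism attached. All names and the message format are illustrative assumptions, not the system studied in the thesis.

```python
# Hypothetical sketch of system-neutral feedback: automatically generated
# from past user activity, with no subjective valuation. The function and
# message format are assumptions for illustration only.

from datetime import date

def activity_feedback(edits_by_day: dict) -> str:
    """Summarize a wiki user's past activity as neutral feedback."""
    total = sum(edits_by_day.values())
    active_days = sum(1 for n in edits_by_day.values() if n > 0)
    return f"You made {total} edits on {active_days} day(s)."

# Example: three days of logged wiki activity.
log = {date(2013, 5, 1): 3, date(2013, 5, 2): 0, date(2013, 5, 3): 2}
print(activity_feedback(log))  # You made 5 edits on 2 day(s).
```

The point of the sketch is what is absent: no ranking, no grade, no "well done", only a factual report of past behavior.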
- Dissertation: Patterns of Practice - Interdisciplinary Negotiation of Cultural Complexity through Practice-Based Methods in Informatics (2022) Heidt, Michael B. Following the principle of knowing through making, this thesis discusses the development and application of a practice-based methodology for the construction of digital artefacts within cultural contexts. It addresses the epistemological diversity and complexity inherent in interdisciplinary projects, suggesting methodological devices able to navigate the variegated disciplinary landscape present within the respective development projects. The conceptual pair complexity/complication acts as a theoretical point of reference for framing mediations between the formal material of computer code and physically embodied practice in exhibition spaces. The inquiries conducted unfold poietically, in the mode of concrete construction of interactive artefacts. Interactive biographies, tangible tabletops, and collage generators are among the devices developed and deployed. Digital materiality emerges as a key category during the research process, pointing towards productive ambivalences at play within joint practices of digital making.
- Dissertation: Towards a Mobile Office: User Interfaces for Safety and Productivity in Conditionally Automated Vehicles (2021) Schartmüller, Clemens. The widespread adoption of mobile personal computing devices such as notebooks and smartphones has shifted knowledge work towards more mobility beyond the traditional office desk. Rising levels of driving automation on the road may initiate a similar shift. By changing the driver's role to that of a driver-passenger, the demand for so-called Non-Driving Related Tasks (NDRTs) grows. For example, commuters could use their time on the road to prepare for the upcoming office day, or truck drivers could do logistics planning between on- and offloading. However, driver-passengers remain responsible for staying ready to respond to Take-Over Requests (TORs), which occur when a not-yet fully automated vehicle experiences a system failure or functional limitation. Accordingly, this thesis investigates the concept of a mobile office in a Conditionally Automated Driving (SAE L3) vehicle. Its goals are to enable productive NDRT engagement during automated driving phases as well as safe manual driving after TORs. To this end, user interfaces that address these challenges for the typical office tasks of text entry and text comprehension in SAE L3 vehicles are developed and evaluated. Following a user-centered design process, they account for both office work and TOR/driving ergonomics issues. The designs are informed by standards, applied Human-Computer Interaction (HCI) research literature, and theories of cognitive resources and multitasking. Mixed-methods user studies with medium- to high-fidelity prototypes allowed us to assess the interfaces and their features quantitatively and qualitatively, regarding users' objective and subjective performance as well as their physiological responses. From this, we inferred generalizable results on the design features, the underlying theories, and the methods used to design and evaluate them.
We found that merging knowledge from various areas of HCI can, to some extent, promote the safety and productivity of office work in SAE L3 vehicles when interface designs are iteratively improved. Furthermore, the mixed-methods evaluations revealed detailed aspects of applying prevalent HCI theory and applied research findings in a novel and complex domain. Overall, we report findings on various mobile office interface modalities and combinations concerning their impact on ergonomics factors such as performance, workload, situational awareness, and well-being. Additionally, we detail the methodological approach taken, including the infrastructure required to implement it.
- Dissertation: Augmented Reality Windshield Displays for Automated Driving (2022) Riegler, Andreas. The advancement of automated driving technology promises a multitude of benefits for society and individuals, such as increased safety, improved traffic efficiency, and mobility for the impaired. Still, potentially the greatest benefit for the individual is the possibility to engage in non-driving related tasks (NDRTs), such as office work (e.g., writing emails, performing video calls and chats) and entertainment activities (e.g., gaming, watching a movie). This opportunity could be realized through novel display interfaces such as windshield displays (WSDs), enabled with augmented reality visualization capabilities to assist the driver with depth perception. A windshield display provides a larger display space than head-up displays (HUDs) and covers the driver's field of view through the vehicle's windscreen. However, the different levels of vehicle automation must be considered when new driver-vehicle cooperation interfaces are introduced. In SAE level 3, drivers must be prepared to resume control of the vehicle at any time and on short notice, while in SAE level 5, the vehicle performs the entire dynamic driving task without any need for a human operator. This poses a number of challenges for automotive user interfaces (AUIs), such as increased workload or stress resulting from frequent task-modality switching. For SAE level 3 and higher, WSDs can support the driver in performing NDRTs using visualization techniques, with the aim of turning distraction into engagement. Therefore, new AUIs must be introduced and evaluated, or existing AUIs must be adapted, to increase user experience (UX) in automated vehicles while maintaining driver situation awareness.
Improved safety for vulnerable road users or drivers with cognitive impairments can be achieved with WSDs by showing potential hazards directly in the driver's field of view. However, little research has been conducted on how potential users would use these displays, what information they desire, where content should be placed, and how to interact safely with such an interface in an automated vehicle. Therefore, in our research, we explore WSD content visualization and placement techniques as well as interaction modalities that support the driver in NDRT engagement, improving safety and the in-vehicle user experience. Our user-centered design process encompassed the development of a virtual reality driving simulator for conducting safe and visually immersive user studies. Our results highlight the potential of WSDs for automated driving, driver and passenger preferences for personalized content presentation, novel interactions, and safety considerations when performing visually and cognitively demanding NDRTs.
- Dissertation: Inclusive Human-Machine Interfaces to Increase Perceived Security in Shared Automated Vehicles (2024) Schuß, Martina. Shared automated vehicles (SAVs; SAE level 5) are expected to benefit the environment and society, but they face substantial challenges regarding acceptance, human factors, and user experience, with perceived security as the foremost concern. In addressing these hurdles, it is crucial to engage users from the early stages of research onwards by applying the human-centered design (HCD) process to improve perceived security in, and consequently the adoption of, SAVs. Currently, research and development excludes certain demographics, resulting in data bias and emphasizing the necessity of inclusive research involving diverse groups. This thesis argues for employing suitable methodologies to engage a broad spectrum of individuals in the research process, focusing on the demographics of age, gender, and cultural background. We adopt feminist HCI and HCD to explore the potential of human-machine interfaces (HMIs) to improve perceived security in SAVs using straightforward methods. We conceptualize and implement HMI concepts and evaluate them using a mixed-methods approach, combining simulated study settings implemented in virtual reality (VR) or using video-based prototyping with questionnaires and interviews, to assess the advantages of in-vehicle HMIs for increasing perceived security. Our findings contribute concrete design and interaction solutions to HCI research on SAVs. We introduce digital companions for SAVs, discuss solutions and their potential negative side effects through a feminist HCI lens, and present a validated scale to effectively measure perceived security within SAVs. The results serve as guidance for researchers and practitioners in developing inclusive and appropriate interfaces for SAVs.
- Dissertation: Automated Driving: Towards Trustworthy and Safe Human-Machine Cooperation (2020) Wintersberger, Philipp. Automated vehicles are gradually entering the market, and the technology promises to increase road safety and comfort, among other advantages. An important construct guiding humans' interaction with safety-critical systems is trust, which is especially relevant as most drivers are consumers rather than domain experts such as pilots in aviation. The successful introduction of automated vehicles to the market requires raising the trust of technology skeptics while at the same time preventing overtrust. Overtrust is already suspected of having contributed to several, sometimes fatal, accidents with existing driving automation systems. Consequently, there is a need to investigate trust in the context of automated vehicles and to design systems that maintain safety by preventing both distrust and overtrust, a process also called "trust calibration". As the possibility to engage in non-driving related tasks is an important consumer desire, this work proposes to consider drivers' multitasking demands already in the vehicle design process to prevent emerging trust issues. To this end, a framework integrating theoretical considerations from the domains of trust, human-machine cooperation, and multitasking is proposed. By aligning overall goals between the operator and the system while supporting drivers in tasks at the strategic, tactical, and operational levels of control, a more trustworthy cooperation should be achieved. A series of studies was conducted to identify important dimensions of trust in driving automation as well as scenarios leading to distrust and overtrust. These scenarios were then used to demonstrate how the structured approach provided by the framework supports the design of in-vehicle interfaces.
Three interaction concepts aiming to support drivers at the different levels of automation were designed and evaluated in driving simulator studies. Results highlight the potential of multimodal and attentive user interfaces (interruption management) to deal with overtrust, and of augmented reality visualizations to raise acceptance among drivers distrusting the automation. All approaches were confirmed to improve operators' subjective trust and demonstrate that the structured approach provided by the framework can assist in designing more trustworthy in-vehicle interfaces, which is important for a successful and safe implementation of driving automation systems.
- Dissertation: The DAUX Framework: A Need-Centered Development Approach for User Experience in Driving Automation (2020) Frison, Anna-Katharina. The individual and societal benefits of driving automation can only unfold if the underlying technology is established on the market. As user acceptance depends on users' experience with a technology, i.e., user experience (UX), novel user interfaces (UIs) need to be developed to balance the drawbacks of the individual automation levels (SAE J3016). The predominant innovation- and technology-centered perspective therefore has to be supplemented by a user-centered approach. As a solution, the "DAUX Framework" is proposed as part of a need-centered development approach. The framework offers guidelines on how to a) identify relevant needs for hypothesis and UI concept development and b) evaluate UX by triangulating behavioral, product-, and experience-oriented methods. To derive recommendations for UI development, the introduced approach is applied in three case studies, in which example UIs for different levels of automation (SAE L2, L3, and L4/5) are developed and then evaluated in a high-fidelity driving simulator. Results on partial driving automation (SAE L2) imply that all properties of an automated vehicle, including the usability and aesthetics of an embedded UI, affect drivers' fulfillment of the need for security, even where these properties say nothing about actual system performance. Hence, the current system performance must always be transparent: a safe trip is the basis of positive driving experiences. Further, skipping the launch of conditional driving automation (SAE L3) is justifiable not only from a safety perspective but also from an experiential one. Results show that, due to users' needs for autonomy, competence, and security, the mere possibility of a take-over request at any time negatively impacts the whole journey experience. For high and full driving automation (SAE L4/5), users worry about their needs for competence and autonomy and about the meaning of driving interactions, e.g., accelerating.
Although engaging in non-driving related tasks might offset these problems, there will still be users who appreciate the joy of driving. Hence, optional control should always be offered. The "DAUX Framework", as part of a need-centered development approach, has been applied in different use cases and has proven to be a valid and useful approach for developing UIs that improve UX in driving automation. Consequently, this PhD work supports the individual and societal acceptance of driving automation technology through the appropriate design and development of UIs, laying the foundation for realizing its promised advantages.
- Dissertation: Towards a Human-Robot Interaction Design for People with Motor Disabilities by Enhancing the Visual Space (2024) Arévalo Arboleda, Stephanie. People with motor disabilities experience several physical limitations that affect not only their activities of daily living but also their integration into the labor market. Human-robot collaboration presents opportunities to enhance human capabilities and counter physical limitations through different interaction paradigms and technological devices. However, little is known about the needs, expectations, and perspectives of people with motor disabilities within a human-robot collaborative work environment. In this thesis, we aim to shed light on the perspectives of people with motor disabilities when designing a teleoperation concept that could enable them to perform manipulation tasks in a manufacturing environment. First, we report the concerns of people with motor disabilities, social workers, and caregivers about including a collaborative robotic arm in assembly lines. Second, we identify specific opportunities and potential challenges in hands-free interaction design for robot control. Third, we present a multimodal hands-free interaction for robot control that uses augmented reality to display the user interface. On top of that, we propose a feedback concept that provides augmented visual cues to help robot operators gain a better perception of the location of objects in the workspace and improve performance in pick-and-place tasks. We present our contributions through six studies with people with and without disabilities, and the empirical findings are reported in eight publications. Publications I, II, and IV extend the research efforts of designing human-robot collaborative spaces for people with motor disabilities.
Publication III sheds light on the reasoning behind hands-free modality choices, and Publication VIII evaluates a hands-free teleoperation concept with an individual with motor disabilities. Publications V-VIII explore augmented reality to present a user interface that facilitates hands-free robot control and uses augmented visual cues to address depth perception issues, thus improving performance in pick-and-place tasks. Our findings can be summarized as follows. We point out concerns grouped into three themes: the robot fitting into the social and organizational structure, human-robot synergy, and human-robot problem management. Additionally, we provide five lessons learned from the pragmatic use of participatory design with people with motor disabilities: (1) approach participants through different channels and allow for multidisciplinarity in the research team, (2) consider social dependencies when selecting a participatory design technique, (3) plan for early exposure to robots and other technology, (4) take all opinions in design sessions into account, and (5) acknowledge that ethical implications go beyond consent. Also, we introduce findings about the nature of modality choices in hands-free interaction, which point to the user's own abilities and individual experiences as determining factors in interaction evaluation. Finally, we present and evaluate a possible hands-free multimodal interaction design for robot control using augmented reality and augmented visual cues. We propose that augmented visual cues can improve depth perception and performance in pick-and-place tasks. We therefore evaluated our visual cue designs by taking into account depth-related variables (the target's distance and pose) and subjective certainty. Our results highlight that shorter distances and a clear pose lead to higher success, faster grasping times, and higher certainty.
In addition, we redesigned our augmented visual cues, considering visualization techniques and monocular cues that could enhance the visual space for robot teleoperation. Our results demonstrate that our augmented visual cues can assist robot control and increase accuracy in pick-and-place tasks. In conclusion, our findings on people with motor disabilities in a human-robot collaborative workplace, a hands-free multimodal interaction design, and augmented visual cues extend the knowledge about using mixed reality in human-robot interaction. Further, these contributions have the potential to promote future research on designing inclusive environments for people with disabilities.
- Dissertation: An Interaction Design for AI-enhanced Assistive Human-Robot Collaboration (2024) Pascher, Max. The global population of individuals with motor impairments faces substantial challenges, including reduced mobility, social exclusion, and increased caregiver dependency. While advances in assistive technologies can augment human capabilities, independence, and overall well-being by alleviating caregiver fatigue and care-receiver weariness, the involvement of target users, their needs, and their lived experiences in the ideation, development, and evaluation process is often neglected. Further, current interaction design concepts often prove unsatisfactory, posing challenges to user autonomy and system usability and resulting in additional stress for end users. Here, Artificial Intelligence (AI) can enhance the accessibility of assistive technology. Accordingly, a notable research gap exists in the development and evaluation of interaction design concepts for AI-enhanced assistive robotics. This thesis addresses that gap by streamlining the development and evaluation of shared control approaches while enhancing user integration, through three key contributions. Firstly, it identifies user needs for assistive technologies and explores concepts for communicating robot motion intent. Secondly, it introduces the novel shared control approach Adaptive DoF Mapping Control (ADMC), which generates mappings of a robot's Degrees-of-Freedom (DoFs) based on situational Human-Robot Interaction (HRI) tasks and suggests them to users. Thirdly, it presents and evaluates the Extended Reality (XR) framework AdaptiX for the in-silico development and evaluation of multi-modal interaction designs and feedback methods for shared control applications. In contrast to existing goal-oriented shared control approaches, my work highlights the development of a novel concept that does not rely on computing trajectories for known movement goals.
Instead of pre-determined goals, ADMC utilises its inherent rule engine (for example, a Convolutional Neural Network (CNN)) together with the robot arm's posture and a colour-and-depth camera feed of the robot's gripper surroundings. This approach facilitates a more flexible and situationally aware shared control system. The evaluations in this thesis demonstrate that the ADMC approach significantly reduces task completion time, the average number of necessary switches between DoF mappings, and users' perceived workload, compared to a non-adaptive input method utilising cardinal DoFs. Further, the effectiveness of AdaptiX for in-silico as well as real-world evaluations has been shown in one remote and two laboratory user studies. The thesis emphasises the transformative impact of assistive technologies for individuals with motor impairments, stressing the importance of user-centred design and legible AI-enhanced shared control applications, as well as the benefits of in-silico testing. It also outlines future research opportunities, focusing on refining communication methods, extending the application of approaches like ADMC, and enhancing tools like AdaptiX to accommodate diverse tasks and scenarios. Addressing these challenges can further advance AI-enhanced assistive robotics, promoting the full inclusion of individuals with physical impairments in social and professional spheres.
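The core ADMC idea described above, a rule engine that scores candidate DoF mappings for the current situation and suggests the best one, can be sketched in a toy form. Everything below (the situation features, the candidate mappings, and the scoring rules) is an illustrative assumption, not the thesis implementation, which may use a CNN and real sensor data.

```python
# Hypothetical toy sketch of Adaptive DoF Mapping Control (ADMC):
# instead of fixed cardinal DoF mappings, a rule engine rates candidate
# mappings against the current situation and suggests the most useful one,
# reducing manual mode switches. All names and rules are assumptions.

from dataclasses import dataclass

@dataclass
class Situation:
    """Simplified world state a real system would derive from the
    robot arm's posture and a colour-and-depth camera feed."""
    distance_to_target: float   # metres from gripper to target object
    gripper_aligned: bool       # is the gripper oriented for grasping?

# Candidate DoF mappings: each assigns the user's input axes to robot DoFs.
MAPPINGS = {
    "translate_xy": "input axes -> horizontal translation",
    "translate_z_rotate": "input axes -> vertical translation + wrist rotation",
    "grasp": "input axes -> gripper open/close",
}

def score(mapping: str, s: Situation) -> float:
    """Illustrative rule engine: rate how useful a mapping is right now."""
    if mapping == "translate_xy":
        return s.distance_to_target          # useful while far from the target
    if mapping == "translate_z_rotate":
        return 0.5 if not s.gripper_aligned else 0.1
    if mapping == "grasp":
        return 1.0 if (s.distance_to_target < 0.05 and s.gripper_aligned) else 0.0
    return 0.0

def suggest_mapping(s: Situation) -> str:
    """Suggest the highest-scoring DoF mapping for the situation."""
    return max(MAPPINGS, key=lambda m: score(m, s))

# Far from the target: keep translating; close and aligned: suggest grasping.
far = Situation(distance_to_target=0.8, gripper_aligned=False)
near = Situation(distance_to_target=0.03, gripper_aligned=True)
print(suggest_mapping(far))   # translate_xy
print(suggest_mapping(near))  # grasp
```

The user still confirms or rejects each suggestion; the adaptivity lies in the system proposing the situationally appropriate mapping rather than cycling through cardinal DoFs by hand.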
- Dissertation: Nomadic Virtual Reality: Overcoming Challenges of Mobile Virtual Reality Head-Mounted Displays (2020) Gugenheimer, Jan. Technological advancements in the fields of optics, display technology, and miniaturization have enabled high-quality virtual reality (VR) head-mounted displays (HMDs) to move beyond research labs and become available as consumer products. This development, in turn, enabled mobile VR HMDs: untethered, self-contained headsets that allow users to immerse themselves wherever and whenever they wish. This creates a novel interaction scenario in which a user is immersed in a virtual environment using a mobile VR HMD inside an unknown context (e.g., watching a 360-degree video while commuting by public transport). This thesis defines this novel interaction scenario as nomadic VR and systematically explores its challenges and opportunities. For this, the interaction scenario is embedded into a larger vision of ubiquitous mixed reality, using models and approaches from the field of context-aware computing, which already explain a similar transformation and paradigm shift from stationary PCs to mobile computing (smartphones): the form factor changed dramatically, cursor-based input was replaced with multi-touch, sound and visual feedback were extended with vibration, and the constantly changing environment enabled a variety of location-based features and services. We argue that a similar transformation will happen from stationary to mobile VR HMDs: input will be adapted, novel output modalities will be added, and the context of use will be incorporated into the virtual environment. This dissertation consists of six case studies, each addressing one aspect of these challenges (input, output, and context). To enable fast and precise input, we present FaceTouch, a novel interaction concept leveraging the backside of the HMD as a touch-sensitive surface.
FaceTouch allows the user to select virtual content inside the nomadic VR interaction scenario without the need for additional accessories or expansive gestures. To extend the output capabilities of mobile VR HMDs, we propose GyroVR, a set of HMD-attached flywheels that leverage the gyroscopic resistance felt when changing the spinning axis of rotation, generating a perception of inertia. GyroVR was designed as a mobile and ungrounded feedback device fitting the nomadic VR interaction scenario. The context was divided into the physical environment and human factors. With CarVR, we explored how to enable the use of VR HMDs inside moving vehicles such as cars. The CarVR system senses and incorporates the additional motions arising in these dynamic physical environments, increasing enjoyment and reducing simulator sickness compared to a stationary setup. The SwiVRChair system presents a motorized office chair, exploring how everyday objects in a static physical environment can be incorporated into the nomadic VR interaction scenario to enhance the overall user experience. Since the nomadic VR interaction scenario often takes place in public environments, for the human-factors context we focused on social scenarios in which people use VR HMDs while people without HMDs (non-HMD users) are in the vicinity. With the ShareVR system, we present a prototype that uses floor projection and mobile displays combined with positional tracking to visualize the virtual world to non-HMD users and enable asymmetric interaction. In a follow-up case study, we adapted the ShareVR concept to fit a mobile VR HMD: FaceDisplay is a modified VR HMD consisting of three touch-sensitive displays and a depth camera attached to the back of the HMD, allowing non-HMD users to perceive and interact with the virtual world through touch or gestures.
We conclude this dissertation with three overarching findings that did not result from the individual research questions but emerged throughout the whole process of this thesis: (1) We argue that current HMDs are mainly optimized for the wearer and ignore the social context; future HMDs have to be designed to include non-HMD users. (2) We show that the physical environment should not only be seen as a challenge but can be leveraged to reduce problems such as simulator sickness and to increase immersion. (3) We propose that, similar to the very first smartphone, current HMDs should be seen as an unfinished device type, and we argue for an engineering research approach that extends the current form factor through novel sensors and actuators.