Listing by keyword "Human-Computer Interaction"
1 - 10 of 30
- Workshop: 17th Workshop "Be-greifbare Interaktion" (Mensch und Computer 2024 - Workshopband, 2024). van Koningsbruggen, Rosa; Delgado Rodriguez, Sarah; Maierhöfer, Vitus; Waldschütz, Hannes; Youssef, Yara; Kullmann, Max; Nischwitz, Lena. Tangible interaction centers around the manipulation of physical objects and the use of our bodies, thus involving the environment and physical context more strongly than visual or speech-based interfaces. The wide range of possibilities for integrating sensors and computing systems into the physical environment provides an ample design space. The research field of Tangible Interaction investigates this space to enable meaningful, human-oriented applications. In this workshop, the German Informatics Society (GI) specialist group “Be-greifbare Interaktion” of the Department of Human-Computer Interaction offers a forum for scientific discourse and interdisciplinary discussion. Contributions range from theoretical, critical and forward-looking reflections to design work and practical implementations. This year’s theme is “Back to Basics”, exploring how tangibles can be used for learning and how to design them for that purpose. The workshop opens the discussion to a broader audience of experts and students in order to share current developments and generate new impulses for the research field.
- Conference paper: A Digital Application for Digital Natives to Improve Orientation Competence and Career Choice Decisions (Mensch und Computer 2022 - Tagungsband, 2022). Brandenburger, Jessica; Mergan, Hamid; Schametat, Jan; Vergin, Annika; Engel, Alexandra; Janneck, Monique. Many young people find it difficult to choose a career. The range of options is diverse and often opaque. A digital application is being developed as part of the JOLanDA project ("Improving the orientation skills of young people in rural regions in biographical decision-making processes"), funded by the Federal Ministry of Education and Research. The aim is to improve young people's orientation and to support career choice decisions. Young people from rural regions in particular often face a double burden: they have to make a career choice and at the same time consider whether this profession can be practiced in their region. For them, this is often tied to a migration decision (leave or stay?). Using various methods, such as gamification, digital natives who have grown up with technologies of the digital age [18] are to be prepared for the career orientation phase in a playful way. On the platform, young people go on a journey of discovery: they can start different expeditions and walk along discovery paths where they find out what might suit them professionally and what makes them happy. The application aims to raise awareness of the dynamics of biographical decisions.
- Workshop contribution: Actual Versus Attributed Consciousness: Studying AI From a User Perspective (Mensch und Computer 2024 - Workshopband, 2024). Hein, Ilka; Ullrich, Daniel; Diefenbach, Sarah. Besides questions relating to technical implementation, the research field of consciousness in AI should focus on users’ perceptions of conscious technologies. From a UX perspective, not only the actual capabilities of a technology are decisive, but also what users see in it. Moreover, it is important to consider whether consciousness attributions to AI are desirable. This position paper highlights the upsides and downsides of such attributions and concludes that a quandary arises, which is described using the example of a therapy robot.
- Dissertation: Affective automotive user interfaces (2020). Braun, Michael. Technological progress in the fields of ubiquitous sensing and machine learning has been fueling the development of user-aware human-computer interaction in recent years. Especially natural user interfaces, like digital voice assistants, can benefit from understanding their users in order to provide a more naturalistic experience. Such systems can, for example, detect the emotional state of users and act accordingly in an empathic way. One major research field working on this topic is Affective Computing, where psycho-physiological measures, speech input, and facial expressions are used to sense human emotions. Affective data allows natural user interfaces to respond to emotions, providing promising perspectives not only for user experience design but also for safety aspects. In automotive environments, informed estimations of the driver’s state can potentially avoid dangerous errors, and evoking positive emotions can improve the experience of driving. This dissertation explores Affective Automotive User Interfaces using two basic interaction paradigms: firstly, emotion regulation systems react to the current emotional state of the user based on live sensing data, allowing for quick interventions. Secondly, emotional interaction synthesizes experiences which resonate with the user on an emotional level. The goals of these two interaction approaches are the promotion of safe behavior and an improvement of user experience. Promoting safe behavior through emotion regulation: systems which detect and react to the driver’s state are expected to have great potential for improving road safety. This work presents a model and methods needed to investigate such systems and an exploration of several approaches to keep the driver in a safe state. The presented methods include techniques to induce emotions and to sample the emotional state of drivers. Three driving simulator studies investigate the impact of emotion-aware interventions in the form of implicit cues, visual mirroring and empathic speech synthesis. We envision emotion-awareness as a safety feature which can detect whether a driver is unfit or in need of support, building on the spread of robust emotion detection technology. Improving user experience with emotional interaction: emotional perception is an essential part of user experience. This thesis entails methods to build emotional experiences derived from a variety of lab and simulator studies, expert feedback, car-storming sessions and design thinking workshops. Systems capable of adapting to the user’s preferences and traits in order to create an emotionally satisfactory user experience do not require emotion detection as input. They rather create value through general knowledge about the user by adapting the output they generate. During this research, cultural and generational influences became evident, which have to be considered when implementing affective automotive user interfaces in future cars. We argue that the future of user-aware interaction lies in adapting not only to the driver’s preferences and settings but also to their current state. This paves the way for the regulation of safe behavior, especially in safety-critical environments like cars, and an improvement of the driving experience.
- Conference paper: Age-Related Differences in Preferences for Using Voice Assistants (Mensch und Computer 2021 - Tagungsband, 2021). Gollasch, David; Weber, Gerhard. In the past few years, voice assistants have become broadly available in different forms of presentation and devices – not only as personal assistants within smartphones but as smart speakers, within TV sets or as part of in-car infotainment systems. Furthermore, we live in an ageing society, and both trends make elderly people increasingly relevant as users of voice assistants. The goal of this study is to identify the specific age-related preferences of older people when using a conversational user interface in the form of a voice assistant. We conducted a survey based on 26 elderly-related communication strategies among participants of different ages. The participants evaluated the strategies according to their own preferences for using voice assistants. As a result, we identified 11 preferences specific to older users. Surprisingly, most of the communication strategies, when applied to voice assistants, seem to be relevant for users of all ages, and a few of the communication strategies do not apply to voice assistants at all. The preferences specific to older people help to develop new guidelines for voice user interfaces or conversational user interfaces in general. They do not automatically lead to such guidelines but provide a foundation to derive requirements, develop guidelines and evaluate them by means of user-based usability tests.
- Dissertation: Behaviour-aware mobile touch interfaces (2018). Buschek, Daniel. Mobile touch devices have become ubiquitous everyday tools for communication and information, as well as for capturing, storing and accessing personal data. They are often seen as personal devices, linked to individual users, who access the digital part of their daily lives via hand-held touchscreens. This personal use and the importance of the touch interface motivate the main assertion of this thesis: mobile touch interaction can be improved by enabling user interfaces to assess and take into account how the user performs these interactions. This thesis introduces the new term "behaviour-aware" to characterise such interfaces. These behaviour-aware interfaces aim to improve interaction by utilising behaviour data: since users perform touch interactions for their main tasks anyway, inferring extra information from said touches may, for example, save users' time and reduce distraction, compared to explicitly asking them for this information (e.g. user identity, hand posture, further context). Behaviour-aware user interfaces may utilise this information in different ways, in particular to adapt to users and contexts. Important questions for this research thus concern understanding behaviour details and influences, modelling said behaviour, and inference and (re)action integrated into the user interface. In several studies covering both analyses of basic touch behaviour and a set of specific prototype applications, this thesis addresses these questions and explores three application areas and goals: 1) Enhancing input capabilities – by modelling users' individual touch targeting behaviour to correct future touches and increase touch accuracy (a minimal, hypothetical sketch of such an offset-correction model follows this listing). The research reveals challenges and opportunities of behaviour variability arising from factors including target location, size and shape, hand and finger, stylus use, mobility, and device size. The work further informs modelling and inference based on targeting data, and presents approaches for simulating touch targeting behaviour and detecting behaviour changes. 2) Facilitating privacy and security – by observing touch targeting and typing behaviour patterns to implicitly verify user identity or distinguish multiple users during use. The research shows and addresses mobile-specific challenges, in particular changing hand postures. It also reveals that touch targeting characteristics provide useful biometric value both in the lab and in everyday typing. Influences of common evaluation assumptions are assessed and discussed as well. 3) Increasing expressiveness – by enabling interfaces to pass on behaviour variability from input to output space, studied with a keyboard that dynamically alters the font based on current typing behaviour. Results show that with these fonts users can distinguish basic contexts as well as individuals. They also explicitly control font influences for personal communication with creative effects. This thesis further contributes concepts and implemented tools for collecting touch behaviour data, analysing and modelling touch behaviour, and creating behaviour-aware and adaptive mobile touch interfaces. Together, these contributions support researchers and developers in investigating and building such user interfaces. Overall, this research shows how variability in mobile touch behaviour can be addressed and exploited for the benefit of the users. The thesis further discusses opportunities for transfer and reuse of touch behaviour models and information across applications and devices, for example to address trade-offs between privacy/security and usability. Finally, the work concludes by reflecting on the general role of behaviour-aware user interfaces, proposing to view them as a way of embedding expectations about user input into interactive artefacts.
- Conference paper: Comparative Evaluation of Gesture and Touch Input for Medical Software (Mensch und Computer 2015 – Proceedings, 2015). Saalfeld, Patrick; Mewes, André; Luz, Maria; Preim, Bernhard; Hansen, Christian. The interaction with medical software during interventions challenges physicians due to the limited space and the necessary sterility. Current input modalities such as touch screen control offer direct, natural interaction and address usability aspects but do not account for these challenges. A promising input modality is freehand gesture interaction, which allows sterile input and a possibly larger interaction space. This work compares gesture and touch input with regard to intuitiveness and the time needed to perform typical intervention tasks. A user study with ten medical students shows mostly significantly better results for touch screen interaction. Despite the advantages of freehand gestures, it is debatable whether these can compensate for the better efficiency and usability of touch screen interaction in the operating room.
- Workshop contribution: CYWARN: Strategy and Technology Development for Cross-Platform Cyber Situational Awareness and Actor-Specific Cyber Threat Communication (Mensch und Computer 2021 - Workshopband, 2021). Kaufhold, Marc-André; Fromm, Jennifer; Riebe, Thea; Mirbabaie, Milad; Kühn, Philipp; Basyurt, Ali Sercan; Bayer, Markus; Stöttinger, Marc; Eyilmez, Kaan; Möller, Reinhard; Fuchß, Christoph; Stieglitz, Stefan; Reuter, Christian. Despite the merits of digitisation in private and professional spaces, critical infrastructures and societies are increasingly exposed to cyberattacks. Thus, Computer Emergency Response Teams (CERTs) are deployed in many countries and organisations to enhance preventive and reactive capabilities against cyberattacks. However, their tasks are becoming more complex due to the increasing amount and varying quality of information disseminated into public channels. Adopting the perspectives of Crisis Informatics and safety-critical Human-Computer Interaction (HCI), and based on both a narrative literature review and group discussions, this paper first outlines the research agenda of the CYWARN project, which seeks to design strategies and technologies for cross-platform cyber situational awareness and actor-specific cyber threat communication. Second, it identifies and elaborates eight research challenges with regard to the monitoring, analysis and communication of cyber threats in CERTs, which serve as a starting point for in-depth research within the project.
- Conference paper: Designing VUIs for Social Assistance Robots for People with Dementia (Mensch und Computer 2021 - Tagungsband, 2021). Striegl, Julian; Gollasch, David; Loitsch, Claudia; Weber, Gerhard. Elderly people, and especially people with dementia, often experience social isolation and need assistance while performing activities of daily living. We investigate a novel approach to this problem by integrating voice assistants and social assistance robots. Due to the special communication needs of people with mild cognitive impairment, the interfaces of such systems have to be designed around the particular requirements of the target user group. This paper investigates how a voice user interface should be designed for elderly users with mild cognitive impairment – such as an early stage of dementia – to provide personalised support throughout activities of daily living. A context and user analysis delivered a set of 11 guidelines for voice user interfaces for people with dementia. For a pilot study, we selected those strategies that caregivers often apply in their communication with people with dementia and evaluated the voice user interface with elderly participants and healthcare workers, who reported high feasibility, usefulness and acceptance of the designed system.
- Text document: Evaluation of Motion-based Touch-typing Biometrics in Online Financial Environments (BIOSIG 2017, 2017). Buriro, Attaullah; Gupta, Sandeep; Crispo, Bruno. This paper presents a bimodal scheme for user authentication in mobile banking/financial applications (apps): a mechanism that considers both the way a user enters an 8-digit PIN/password and the phone movements made while doing so. The scheme authenticates the user based on the timing differences of the entered strokes. Additionally, it enhances security by introducing a transparent layer utilizing the phone movements made by the user. The scheme is assumed to be highly secure, as mimicking the invisible touch timings and phone movements could be extremely onerous. Our analysis is based on 2850 samples collected from 95 users in a 3-day unsupervised field experiment, using 3 multi-class classifiers. The Random Forest (RF) classifier outperformed the other two classifiers and provided a True Acceptance Rate (TAR) of 96%. (A minimal, hypothetical sketch of such a bimodal verifier also follows this listing.)
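
The touch-targeting correction described in the "Behaviour-aware mobile touch interfaces" entry above can be illustrated with a deliberately simplified sketch: fit a per-user regression on logged pairs of raw touch points and intended target centres, then shift new touches by the predicted offset. The linear model, the calibration data and the `correct` helper below are illustrative assumptions, not the thesis' actual models or data.

```python
# Hypothetical sketch of per-user touch-offset correction; NOT the thesis' models.
import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative calibration data: raw touch points and the target centres the
# user was actually aiming for (screen coordinates in pixels, made up here).
touches = np.array([[105.0, 212.0], [322.0, 415.0], [508.0, 95.0], [212.0, 641.0]])
targets = np.array([[100.0, 200.0], [315.0, 405.0], [500.0, 88.0], [204.0, 630.0]])

# Model the systematic 2D offset (target - touch) as a function of touch location.
offset_model = LinearRegression().fit(touches, targets - touches)

def correct(raw_touch):
    """Shift a new raw touch by the offset predicted for that screen region."""
    raw = np.asarray(raw_touch, dtype=float).reshape(1, -1)
    return (raw + offset_model.predict(raw)).ravel()

print(correct([300.0, 400.0]))  # corrected landing point for a new touch
```

In practice, richer models and further factors discussed in the thesis (hand posture, mobility, device size) would be needed; the sketch only shows the basic correct-by-predicted-offset idea.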
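Similarly, the bimodal PIN-entry scheme from the BIOSIG 2017 entry can be sketched, under stated assumptions, as a Random Forest trained on concatenated keystroke timings and phone-movement statistics. The feature layout, the synthetic data and the acceptance threshold below are hypothetical and only illustrate the general approach, not the authors' implementation.

```python
# Hypothetical sketch of a bimodal keystroke-timing + phone-movement verifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def feature_vector(key_timestamps, accel, gyro):
    """Concatenate inter-keystroke timings with simple motion statistics."""
    timings = np.diff(key_timestamps)                      # 7 gaps for an 8-digit PIN
    motion = np.concatenate([accel.mean(axis=0), accel.std(axis=0),
                             gyro.mean(axis=0), gyro.std(axis=0)])
    return np.concatenate([timings, motion])               # 7 + 12 = 19 features

# Synthetic enrolment data standing in for real PIN entries: one 19-dimensional
# feature vector per entry, labelled with the user who produced it.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 19))
y = rng.integers(0, 10, size=300)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def verify(sample, claimed_user, threshold=0.6):
    """Accept only if the forest assigns enough probability to the claimed identity."""
    proba = clf.predict_proba(sample.reshape(1, -1))[0]
    return proba[list(clf.classes_).index(claimed_user)] >= threshold

# Featurise and verify a single (synthetic) PIN entry against a claimed identity.
ts = np.cumsum(rng.uniform(0.1, 0.4, size=8))   # 8 key-down timestamps (seconds)
accel = rng.normal(size=(50, 3))                # accelerometer readings during entry
gyro = rng.normal(size=(50, 3))                 # gyroscope readings during entry
print(verify(feature_vector(ts, accel, gyro), claimed_user=3))
```

The threshold on the claimed user's class probability stands in for the paper's acceptance decision; tuning it trades off true-acceptance against false-acceptance rates.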