Listing by author "Chuang, Lewis L."
1 - 5 of 5
- Journal article: Adapting visualizations and interfaces to the user (it - Information Technology: Vol. 64, No. 4-5, 2022)
  Chiossi, Francesco; Zagermann, Johannes; Karolus, Jakob; Rodrigues, Nils; Balestrucci, Priscilla; Weiskopf, Daniel; Ehinger, Benedikt; Feuchtner, Tiare; Reiterer, Harald; Chuang, Lewis L.; Ernst, Marc; Bulling, Andreas; Mayer, Sven; Schmidt, Albrecht
  Adaptive visualizations and interfaces pervade our everyday tasks, aiming to improve interaction in terms of user performance and experience. This approach draws on several kinds of user input, whether physiological, behavioral, qualitative, or multimodal combinations thereof, to enhance the interaction. Given the multitude of approaches, we outline current research trends in the inputs used to adapt visualizations and user interfaces. Moreover, we discuss methodological approaches used in mixed reality, physiological computing, visual analytics, and proficiency-aware systems. With this work, we provide an overview of current research on adaptive systems.
- Conference paper: Auto-Generating Multimedia Language Learning Material for Children with Off-the-Shelf AI (Mensch und Computer 2022 - Tagungsband, 2022)
  Draxler, Fiona; Haller, Laura; Schmidt, Albrecht; Chuang, Lewis L.
  The unique affordances of mobile devices enable the design of novel language learning experiences with auto-generated learning materials. Thus, they can support independent learning without increasing the burden on teachers. In this paper, we investigate the potential and the design requirements of such learning experiences for children. We implement a novel mobile app that auto-generates context-based multimedia material for learning English. It automatically labels photos children take with the app and uses them as a trigger for generating content via machine translation, image retrieval, and text-to-speech. An exploratory study with 25 children showed that children were equally ready to engage with this app and with a non-personal version that used random instead of personal photos. Overall, the children appreciated the independence gained compared to learning at school but missed the teachers' support. From a technological perspective, we found that auto-generation works in many cases. However, handling erroneous input, such as blurry images and spelling mistakes, is crucial for children as a target group. We conclude with design recommendations for future projects, including scaffolds for the photo-taking process and information redundancy for identifying inaccurate auto-generation results.
- Workshop paper: Einfluss von Ablenkung und Augenbewegungen auf Steuerungsaufgaben [Influence of distraction and eye movements on steering tasks] (Mensch & Computer 2012: interaktiv informiert – allgegenwärtig und allumfassend!?, 2012)
  Bieg, Hans-Joachim; Bülthoff, Heinrich H.; Chuang, Lewis L.
  This study investigated the influence of visual distraction on steering tasks. The results suggest that even a brief shift of attention and gaze is accompanied by a systematic effect on the steering task. Conversely, the concurrently performed steering task also systematically affects eye movements. Accounting for such interference can be useful when developing graphical on-board information systems for cars or aircraft.
- Conference paper: An Environment-Triggered Augmented-Reality Application for Learning Case Grammar (DELFI 2020 – Die 18. Fachtagung Bildungstechnologien der Gesellschaft für Informatik e.V., 2020)
  Draxler, Fiona; Wallwitz, Elena; Schmidt, Albrecht; Chuang, Lewis L.
  We present a handheld augmented-reality app that enables learners of German to study case grammar using real objects in their surroundings. The system can easily be integrated into the everyday life of busy learners and provides them with a means to study on their own. Specifically, the app detects objects in the learners' surroundings and determines their spatial relationship. It then automatically generates quizzes to test the dative and accusative cases. The app provides an example of how structural concepts of languages can be taught with AR.
- Conference paper: IDeA: A Demonstration of a Mixed Reality System to Support Living with Central Field Loss (Mensch und Computer 2022 - Tagungsband, 2022)
  Lang, Florian; Grootjen, Jesse W.; Chuang, Lewis L.; Machulla, Tonja
  People with visual impairments face multiple challenges in their everyday lives. They must regularly visit doctors to monitor the progress of their impairment, as well as advisory offices or local support groups to learn strategies for overcoming everyday challenges. However, traveling to the appropriate facilities itself often poses challenges for the patients. We propose IDeA, a system based on augmented- and virtual-reality technology for people with visual impairments. IDeA can lower costs and improve access to medical care, support digitalization in the treatment of visual impairments, and provide assistance in the everyday lives of people with visual impairments. IDeA is a three-fold system that supports: 1) simulating symptoms of visual impairments to raise awareness among people without visual impairments, 2) showcasing early detection of symptoms and providing visual augmentations of the real world to help patients overcome challenges, and 3) supporting doctors through telemedical consultation, medical eye examinations, and the training of visual strategies. We outline the benefits for all stakeholders and how IDeA can improve the lives of people with visual impairments.