Listing by author "Pfingsthorn, Max"
1 - 3 of 3
- Conference paper: Less is More! Support of Parallel and Time-critical Assembly Tasks with Augmented Reality (Mensch und Computer 2021 - Tagungsband, 2021). Illing, Jannike; Klinke, Philipp; Pfingsthorn, Max; Heuten, Wilko.
  Manual assembly tasks require workers to precisely assemble parts in 3D space in parallel steps. Additional time pressure often further increases the complexity of these parallel tasks (e.g., in adhesive bonding processes). Performance in parallel tasks is heavily influenced by the capacity of working memory, which is easily overwhelmed when the rules and states of too many tasks have to be remembered or perceived. We therefore use Augmented Reality (AR) to investigate how visual assistance with parallel tasks affects worker performance in time- and space-dependent process steps. In a user study, we compare three conditions: AR instructions presenting (a) one, (b) two, and (c) four tasks per step on a tablet. For the instructions we used selected work steps from a standardized adhesive bonding process as representative of common time-critical assembly tasks. Our results show that instructions displaying multiple tasks simultaneously per step can improve process time but also increase error rate and task load. Instructions with fewer displayed tasks per work step yielded better subjective ratings among participants, which may increase motivation, especially among less experienced workers. Our results help designers and developers build assistance systems for time-critical, simultaneously executable assembly tasks while considering process time, error rate, and task load.
- Conference paper: Mind the ARm: realtime visualization of robot motion intent in head-mounted augmented reality (Mensch und Computer 2020 - Tagungsband, 2020). Gruenefeld, Uwe; Prädel, Lars; Illing, Jannike; Stratmann, Tim; Drolshagen, Sandra; Pfingsthorn, Max.
  Established safety sensor technology shuts down industrial robots when a collision is detected, causing preventable loss of productivity. To minimize downtime, we implemented three Augmented Reality (AR) visualizations (Path, Preview, and Volume) that allow humans to understand robot motion intent and give way to the robot. We compare the visualizations in a user study in which a small cognitive task is performed in a shared workspace. We found that Preview and Path required significantly longer head rotations to perceive robot motion intent. Volume required the shortest head rotation and was perceived as safest, allowing the robot arm to come closest before participants left the shared workspace, without causing shutdowns.
- Journal article: Surface Representations for 3D Mapping (KI - Künstliche Intelligenz: Vol. 24, No. 3, 2010). Birk, Andreas; Pathak, Kaustubh; Vaskevicius, Narunas; Pfingsthorn, Max; Poppinga, Jann; Schwertfeger, Sören.
  Point clouds, i.e., sets of 3D coordinates of surface point samples from obstacles, are the predominant representation for 3D mapping. They are the raw data format of most 3D sensors and the basis for state-of-the-art 3D scan registration algorithms. It is argued here that point clouds have severe limitations, and a case is made for a necessary paradigm shift to surface-based representations. In addition to several conceptual arguments, it is shown how a surface-based approach enables fast and robust registration of 3D data without requiring robot motion estimates from other sensors. Concretely, a short overview of the authors' own work, dubbed 3D Plane SLAM, is presented. It extracts planes, with uncertainties, from 3D range scans. Two scans can then be registered by determining the correspondence set that maximizes the global rigid-body motion constraint while finding the related optimal decoupled rotations and translations with their underlying uncertainties. The registered scans are embedded in pose-graph SLAM for loop closing and relaxation.
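The decoupling of rotation and translation mentioned in the last abstract can be illustrated with a minimal sketch. Assuming planes are already corresponded and each plane is given by a unit normal n and offset d (with n·x = d), the rotation follows from the normals alone (Wahba's problem, solved here via the Kabsch/SVD method) and the translation from the offsets by linear least squares, using the relation n_a·t = d_a - d_b. The function and variable names are invented for illustration; the per-plane uncertainties and the correspondence search of the actual 3D Plane SLAM system are omitted.

```python
import numpy as np

def register_planes(normals_a, d_a, normals_b, d_b):
    """Estimate the rigid transform (R, t) mapping scan B onto scan A
    from already-corresponded plane parameters.  Toy sketch only:
    uncertainties and correspondence search are ignored."""
    A = np.asarray(normals_a, dtype=float)  # one unit normal per row
    B = np.asarray(normals_b, dtype=float)

    # Rotation: minimize sum ||a_i - R b_i||^2 over rotations
    # (Wahba's problem), solved by SVD of the cross-covariance.
    H = B.T @ A
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections: force det(R) = +1.
    S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ S @ U.T

    # Translation: each correspondence yields one linear equation
    # n_a . t = d_a - d_b; solve them jointly by least squares.
    t, *_ = np.linalg.lstsq(A, np.asarray(d_a) - np.asarray(d_b),
                            rcond=None)
    return R, t
```

The decoupling works because rotating a plane changes only its normal while translating it changes only its offset, so each subproblem is small and has a closed-form solution.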