Browsing by Author "Schweigert, Robin"
1 - 2 of 2
- Conference paper: EyePointing: A Gaze-Based Selection Technique (Mensch und Computer 2019 - Tagungsband, 2019). Schweigert, Robin; Schwind, Valentin; Mayer, Sven.
  Interacting with objects from a distance is challenging not only in the real world but also in virtual reality (VR). One issue concerns the distinction between attention for exploration and attention for selection, also known as the Midas-touch problem. Researchers have proposed numerous approaches to overcome this challenge, using additional devices, gaze-input cascaded pointing, or eye blinks to select the remote object. However, techniques such as MAGIC pointing still require additional input to confirm a selection made by eye gaze and thus force the user to perform unnatural behavior; there is still no solution that enables truly natural, unobtrusive, device-free selection. In this paper, we propose EyePointing, a technique that combines MAGIC pointing with the referential mid-air pointing gesture to select objects at a distance: the eye gaze references the object, while the pointing gesture acts as the trigger. Our technique counteracts the Midas-touch problem. (A minimal sketch of this selection logic appears after the listing below.)
- Conference paper: KnuckleTouch: Enabling Knuckle Gestures on Capacitive Touchscreens using Deep Learning (Mensch und Computer 2019 - Tagungsband, 2019). Schweigert, Robin; Leusmann, Jan; Hagenmayer, Simon; Weiß, Maximilian; Le, Huy Viet; Mayer, Sven; Bulling, Andreas.
  While mobile devices have become essential for social communication and have paved the way for work on the go, their interactive capabilities are still limited to simple touch input. A promising enhancement for touch interaction is knuckle input, but recognizing knuckle gestures robustly and accurately remains challenging. We present a method to differentiate between 17 finger and knuckle gestures based on a long short-term memory (LSTM) machine learning model. Furthermore, we introduce an open-source approach that is ready to deploy on commodity touch-based devices. The model was trained on a new dataset that we collected in a mobile interaction study with 18 participants. We show that our method achieves an accuracy of 86.8% in recognizing one of the 17 gestures and an accuracy of 94.6% in differentiating between finger and knuckle. In our evaluation study, we validated our models and found that the LSTM gesture recognition achieved an accuracy of 88.6%. We show that KnuckleTouch can be used to improve input expressiveness and to provide shortcuts to frequently used functions. (A minimal sketch of such an LSTM classifier also follows below.)
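
The EyePointing abstract describes the core mechanism (gaze references, pointing gesture triggers) but gives no implementation details, so the following is a minimal sketch under stated assumptions: a hypothetical gaze point and target list in normalized screen coordinates, and a boolean flag from a hypothetical mid-air pointing-gesture detector. The key property is that gaze alone never selects anything; the pointing gesture is the explicit trigger, which is how the technique sidesteps the Midas-touch problem.

```python
from dataclasses import dataclass
from typing import Iterable, Optional

@dataclass
class Target:
    name: str
    x: float  # normalized screen coordinates (hypothetical representation)
    y: float

def eye_pointing_select(gaze_x: float, gaze_y: float,
                        pointing_detected: bool,
                        targets: Iterable[Target],
                        radius: float = 0.05) -> Optional[Target]:
    """Return the gazed-at target only while a mid-air pointing
    gesture is detected; gaze alone never triggers a selection."""
    if not pointing_detected:
        return None  # attention for exploration: no trigger, no selection
    best, best_dist = None, radius
    for t in targets:
        dist = ((t.x - gaze_x) ** 2 + (t.y - gaze_y) ** 2) ** 0.5
        if dist < best_dist:  # closest target within a tolerance radius
            best, best_dist = t, dist
    return best

# hypothetical usage: gaze rests near "lamp" while the user points
targets = [Target("lamp", 0.30, 0.40), Target("door", 0.80, 0.55)]
print(eye_pointing_select(0.31, 0.41, pointing_detected=True, targets=targets))
```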
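Likewise, the KnuckleTouch abstract names an LSTM over touch input but not its architecture, so the following is a minimal sketch, assuming PyTorch and hypothetical choices for the per-frame feature count, hidden size, and sequence length; only the 17-class output matches the abstract.

```python
import torch
import torch.nn as nn

class TouchGestureLSTM(nn.Module):
    """Classify a sequence of per-frame touch features (hypothetical:
    position plus capacitive blob shape) into 17 gesture classes."""
    def __init__(self, n_features: int = 8, hidden: int = 64, n_classes: int = 17):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features); the final hidden state
        # summarizes the whole touch sequence
        _, (h, _) = self.lstm(x)
        return self.head(h[-1])  # (batch, n_classes) logits

# hypothetical usage: 4 sequences of 50 frames with 8 features each
model = TouchGestureLSTM()
logits = model(torch.randn(4, 50, 8))
print(logits.argmax(dim=1))  # predicted gesture class per sequence
```

A sequence model fits this task because a knuckle tap and a finger tap can look similar in any single frame; the temporal evolution of the capacitive footprint is what distinguishes them.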