Künstliche Intelligenz 31(4) - November 2017

Sorted by: Latest publications

1 - 10 of 11
  • Journal article
    Automated interpretation of eye–hand coordination in mobile eye tracking recordings
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Mussgnug, Moritz; Singer, Daniel; Lohmeyer, Quentin; Meboldt, Mirko
    Mobile eye tracking is beneficial for the analysis of human–machine interactions with tangible products, as it tracks eye movements reliably in natural environments and allows for insights into human behaviour and the associated cognitive processes. However, current methods require a manual screening of the video footage, which is time-consuming and subjective. This work aims to automatically detect cognitively demanding phases in mobile eye tracking recordings. The approach presented combines the user’s perception (gaze) and action (hand) to isolate demanding interactions based upon a multi-modal feature-level fusion. It was validated in a usability study of a 3D printer with 40 participants by comparing the usability problems found to a thorough manual analysis. The new approach detected 17 out of 19 problems, while the time for manual analyses was reduced by 63%. Beyond eye tracking alone, adding information about the hand enriches the insights into human behaviour. The field of AI could significantly advance this approach by improving hand tracking through region-proposal CNNs, by detecting the parts of a product and mapping the demanding interactions to these parts, or even by a fully automated end-to-end detection of demanding interactions via deep learning. This could lay the basis for machines providing real-time assistance to their users in cases where they are struggling. (A hedged code sketch of such a gaze–hand fusion step follows this listing.)
  • Journal article
    Semantic Interpretation of Multi-Modal Human-Behaviour Data
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Bhatt, Mehul; Kersting, Kristian
    This special issue presents interdisciplinary research—at the interface of artificial intelligence, cognitive science, and human-computer interaction—focussing on the semantic interpretation of human behaviour. The special issue constitutes an attempt to highlight and steer foundational methods research in artificial intelligence, in particular knowledge representation and reasoning, for the development of human-centred cognitive assistive technologies. Of specific interest and focus have been application outlets for basic research in knowledge representation and reasoning and computer vision for the cognitive, behavioural, and social sciences.
  • Journal article
    Cognition, Interaction, Design
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Bhatt, Mehul; Cutting, James; Levin, Daniel; Lewis, Clayton
    This transcript documents select parts of discussions on the confluence of cognition, interaction, design, and human behaviour studies. The interview and related events were held as part of the CoDesign 2017 Roundtable (Bhatt in CoDesign 2017—The Bremen Summer of Cognition and Design/CoDesign Roundtable. University of Bremen, Bremen, 2017) at the University of Bremen (Germany) in June 2017. The Q/A sessions were moderated by Mehul Bhatt (University of Bremen, Germany, and Örebro University, Sweden) and Daniel Levin (Vanderbilt University, USA). Daniel Levin served in a dual role: as co-moderator of the discussion as well as interviewee. The transcript is published as part of a KI Journal special issue on “Semantic Interpretation of Multi-Modal Human Behaviour Data” (Bhatt and Kersting in Special Issue on: Semantic Interpretation of Multimodal Human Behaviour Data, Artif Intell, 2017).
  • Journal article
    Red Hen Lab: Dataset and Tools for Multimodal Human Communication Research
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Joo, Jungseock; Steen, Francis F.; Turner, Mark
    Researchers in the fields of AI and Communication both study human communication, but despite the opportunities for collaboration, they rarely interact. Red Hen Lab is dedicated to bringing them together for research on multimodal communication, using multidisciplinary teams working on vast ecologically-valid datasets. This article introduces Red Hen Lab with some possibilities for collaboration, demonstrating the utility of a variety of machine learning and AI-based tools and methods to fundamental research questions in multimodal human communication. Supplemental materials are at http://babylon.library.ucla.edu/redhen/KI.
  • Journal article
    Assigning Group Activity Semantics to Multi-Device Mobile Sensor Data
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Loke, Seng W.; Abkenar, Amin Bakshandeh
    Numerous types of sensor data can be gathered via mobile devices such as smartphones and smartwatches, as well as via things endowed with sensors. Such sensor data from disparate sources can be aggregated, and inferences can be made about the user, the user’s physical activities, as well as the physical activities of the group the user is part of. One perspective on this is that the group’s physical activity becomes an explanation for the sensor readings obtained from this set of sensors. This paper proposes an explanation-based perspective on reasoning about multi-device sensor data and describes a framework called GroupSense that prototypes this idea. (A minimal sketch of this explanation-based scoring idea follows this listing.)
  • Journal article
    Declarative Reasoning about Space and Motion with Video
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Suchan, Jakob
    We present a commonsense theory of space and motion for representing and reasoning about motion patterns in video data, in order to perform declarative (deep) semantic interpretation of visuo-spatial sensor data, e.g., object tracking output, eye-tracking data, and movement trajectories. The theory has been implemented within constraint logic programming to support integration into large-scale AI projects. The theory is domain-independent and has been applied in a range of domains in which the capability to semantically interpret motion in visuo-spatial data is central. In this paper, we demonstrate its capabilities in the context of cognitive film studies for analysing the visual perception of spectators by integrating the visual structure of a scene with spectators’ gaze acquired from eye-tracking experiments. (A simplified, non-CLP sketch of qualitative motion abstraction follows this listing.)
  • Journal article
    Automatic Detection of Visual Search for the Elderly using Eye and Head Tracking Data
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Dietz, Michael; Schork, Daniel; Damian, Ionut; Steinert, Anika; Haesner, Marten; André, Elisabeth
    With increasing age, we often find ourselves in situations where we search for certain items, such as keys or wallets, but cannot remember where we left them. Since finding these objects usually turns into a lengthy and frustrating process, we propose an approach for the automatic detection of visual search for older adults to identify the point in time when users need assistance. In order to collect the necessary sensor data for the recognition of visual search, we develop a completely mobile eye and head tracking device specifically tailored to the requirements of older adults. Using this device, we conduct a user study with 30 participants aged between 65 and 80 years (average age 71.7; 50% female) to collect training and test data. During the study, each participant is asked to perform several activities, including the visual search for objects in a real-world setting. We use the recorded data to train a support vector machine (SVM) classifier and achieve a recognition rate of 97.55% with the leave-one-user-out evaluation method. The results indicate the feasibility of an approach towards the automatic detection of visual search in the wild. (A sketch of such a leave-one-user-out SVM evaluation follows this listing.)
  • Journal article
    News
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017)
  • Journal article
    Using Ontology-Based Data Access to Enable Context Recognition in the Presence of Incomplete Information
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Thost, Veronika
  • Journal article
    AI, Kitsch, and Communication
    (KI - Künstliche Intelligenz: Vol. 31, No. 4, 2017) Hertzberg, Joachim
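
The abstract of “Automated interpretation of eye–hand coordination in mobile eye tracking recordings” above describes a multi-modal feature-level fusion of gaze and hand data to isolate demanding interactions. The minimal Python sketch below illustrates that general idea only; the window features, the random-forest classifier, and the toy data are assumptions, not the authors’ pipeline.

```python
# Illustrative sketch of feature-level fusion of gaze and hand signals to flag
# demanding interaction windows. Feature choices, the classifier, and the toy
# data are hypothetical and do not reproduce the paper's method.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(gaze_xy, hand_xy):
    """Concatenate simple gaze and hand descriptors for one time window."""
    gaze_dispersion = np.ptp(gaze_xy, axis=0).sum()                     # spread of gaze points
    gaze_path = np.linalg.norm(np.diff(gaze_xy, axis=0), axis=1).sum()  # total gaze travel
    hand_path = np.linalg.norm(np.diff(hand_xy, axis=0), axis=1).sum()  # total hand travel
    gaze_hand_dist = np.linalg.norm(gaze_xy.mean(axis=0) - hand_xy.mean(axis=0))
    return np.array([gaze_dispersion, gaze_path, hand_path, gaze_hand_dist])

# Toy training data: each window is labelled demanding (1) or not (0).
rng = np.random.default_rng(0)
X = np.stack([window_features(rng.normal(size=(30, 2)), rng.normal(size=(30, 2)))
              for _ in range(200)])
y = rng.integers(0, 2, size=200)  # placeholder labels, normally from manual annotation

clf = RandomForestClassifier(random_state=0).fit(X, y)
print(clf.predict(X[:5]))  # 1 marks windows predicted as demanding
```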
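For “Assigning Group Activity Semantics to Multi-Device Mobile Sensor Data”, the abstract frames a group activity as an explanation for the readings of several devices. The sketch below, with invented device names, activity labels, and likelihoods, scores candidate group activities by how well they explain the per-device observations; it is not the GroupSense model.

```python
# Hedged sketch of an explanation-based reading of multi-device sensor data:
# a candidate group activity "explains" the per-device observations it makes
# likely. All names and numbers below are illustrative placeholders.

# Per-device activity likelihoods inferred locally (e.g., from accelerometers).
observations = {
    "alice_phone": {"walking": 0.8, "sitting": 0.1, "cycling": 0.1},
    "bob_watch":   {"walking": 0.7, "sitting": 0.2, "cycling": 0.1},
    "carol_phone": {"walking": 0.6, "sitting": 0.3, "cycling": 0.1},
}

# Each group-activity hypothesis states which individual activity it expects.
hypotheses = {"group_walk": "walking", "group_meeting": "sitting",
              "group_ride": "cycling"}

def explanation_score(expected_activity):
    """How well a hypothesis explains all devices (product of likelihoods)."""
    score = 1.0
    for device_obs in observations.values():
        score *= device_obs.get(expected_activity, 0.0)
    return score

best = max(hypotheses, key=lambda h: explanation_score(hypotheses[h]))
print(best)  # "group_walk": the hypothesis that best explains the readings
```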
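The abstract of “Declarative Reasoning about Space and Motion with Video” describes a commonsense theory implemented in constraint logic programming. The Python sketch below only hints at the underlying idea of abstracting per-frame spatial relations into a symbolic motion sequence; the relation names and box data are assumptions, and the authors’ CLP formalisation is not reproduced.

```python
# Simplified sketch (not the paper's CLP implementation) of qualitative
# reasoning about motion: per-frame topological relations between two tracked
# boxes are abstracted into a symbolic sequence that can be matched declaratively.
from dataclasses import dataclass

@dataclass
class Box:
    x1: float; y1: float; x2: float; y2: float

def relation(a: Box, b: Box) -> str:
    """Coarse, RCC-style relation between two axis-aligned boxes."""
    if a.x2 < b.x1 or b.x2 < a.x1 or a.y2 < b.y1 or b.y2 < a.y1:
        return "disconnected"
    if a.x1 >= b.x1 and a.x2 <= b.x2 and a.y1 >= b.y1 and a.y2 <= b.y2:
        return "inside"
    return "overlapping"

# Tracked boxes for an object and a gaze region over four frames (toy data).
obj  = [Box(0, 0, 2, 2), Box(3, 0, 5, 2), Box(6, 0, 8, 2), Box(9, 0, 11, 2)]
gaze = [Box(8, 0, 12, 4)] * 4

sequence = [relation(o, g) for o, g in zip(obj, gaze)]
print(sequence)  # ['disconnected', 'disconnected', 'overlapping', 'inside']
# A motion pattern such as "approaching the gaze region" can then be stated
# declaratively, e.g., no 'disconnected' may occur after the first 'overlapping'.
```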
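Finally, the abstract of “Automatic Detection of Visual Search for the Elderly using Eye and Head Tracking Data” reports an SVM evaluated with leave-one-user-out cross-validation. The snippet below shows what that evaluation scheme can look like in scikit-learn; the synthetic features, labels, and pipeline settings are placeholders, not the study’s data or hyperparameters.

```python
# Sketch of the evaluation scheme named in the abstract: an SVM scored with
# leave-one-user-out cross-validation. Features and labels are synthetic
# stand-ins for the eye- and head-tracking data.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_users, windows_per_user = 30, 40
X = rng.normal(size=(n_users * windows_per_user, 12))     # per-window features
y = rng.integers(0, 2, size=len(X))                       # 1 = visual search
groups = np.repeat(np.arange(n_users), windows_per_user)  # one group per user

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"mean accuracy across held-out users: {scores.mean():.3f}")
```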