Listing by author "Pustejovsky, James"
1 - 2 of 2
- Journal article: Designing a Uniform Meaning Representation for Natural Language Processing (KI - Künstliche Intelligenz: Vol. 35, No. 0, 2021)
  Van Gysel, Jens E. L.; Vigus, Meagan; Chun, Jayeol; Lai, Kenneth; Moeller, Sarah; Yao, Jiarui; O’Gorman, Tim; Cowell, Andrew; Croft, William; Huang, Chu-Ren; Hajič, Jan; Martin, James H.; Oepen, Stephan; Palmer, Martha; Pustejovsky, James; Vallejos, Rosa; Xue, Nianwen
  In this paper we present Uniform Meaning Representation (UMR), a meaning representation designed to annotate the semantic content of a text. UMR is primarily based on Abstract Meaning Representation (AMR), an annotation framework initially designed for English, but it also draws on other meaning representations. UMR extends AMR to other languages, particularly morphologically complex, low-resource languages. UMR also adds features to AMR that are critical to semantic interpretation, and it enhances AMR with a companion document-level representation that captures linguistic phenomena such as coreference as well as temporal and modal dependencies that can cross sentence boundaries. (A sketch of the AMR-style notation UMR builds on appears below the listing.)
- Journal article: Embodied Human Computer Interaction (KI - Künstliche Intelligenz: Vol. 35, No. 0, 2021)
  Pustejovsky, James; Krishnaswamy, Nikhil
  In this paper, we argue that embodiment can play an important role in the design and modeling of systems developed for Human Computer Interaction. To this end, we describe a simulation platform for building Embodied Human Computer Interactions (EHCI). This system, VoxWorld, enables multimodal dialogue systems that communicate through language, gesture, action, facial expressions, and gaze tracking, in the context of task-oriented interactions. A multimodal simulation is an embodied 3D virtual realization of both the situational environment and the co-situated agents, as well as the most salient content denoted by communicative acts in a discourse. It is built on the modeling language VoxML (Pustejovsky and Krishnaswamy, "VoxML: A Visualization Modeling Language", Proceedings of LREC, 2016), which encodes objects with rich semantic typing and action affordances, and actions themselves as multimodal programs, enabling contextually salient inferences and decisions in the environment. VoxWorld enables embodied HCI by situating both human and artificial agents within the same virtual simulation environment, where they share perceptual and epistemic common ground. We discuss the formal and computational underpinnings of embodiment and common ground, how they interact and specify parameters of the interaction between humans and artificial agents, and we demonstrate behaviors and types of interactions for different classes of artificial agents. (A hypothetical sketch of a VoxML-style object entry appears below the listing.)
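To make the first abstract concrete, here is a minimal sketch of the PENMAN graph notation that AMR uses and that UMR's sentence-level graphs build on. It assumes the third-party Python library `penman` (`pip install penman`); the example sentence and concept choices are the standard AMR illustration, not drawn from the paper itself.

```python
# A minimal AMR graph in PENMAN notation for "The boy wants to go."
# UMR sentence-level graphs use the same notation, extended with
# attributes such as aspect and a companion document-level layer.
import penman

graph = penman.decode("""
(w / want-01
   :ARG0 (b / boy)
   :ARG1 (g / go-02
            :ARG0 b))
""")

# The graph decomposes into (source, role, target) triples.
for source, role, target in graph.triples:
    print(source, role, target)
```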
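For the second abstract, here is a hypothetical sketch of how a VoxML-style object entry might be represented in code: an object carries a semantic type, habitats (configurations that enable its use), and action affordances that an embodied agent can reason over. All class and field names are invented for illustration; this is not the actual VoxML schema or the VoxWorld API.

```python
# Illustrative-only sketch; VoxObject, habitats, and affordances are
# assumptions for this example, not VoxML's actual schema.
from dataclasses import dataclass, field

@dataclass
class VoxObject:
    name: str                                        # lexeme, e.g. "cup"
    semantic_type: str                               # coarse type, e.g. "physobj"
    habitats: list = field(default_factory=list)     # configurations enabling use
    affordances: list = field(default_factory=list)  # actions the object affords

cup = VoxObject(
    name="cup",
    semantic_type="physobj",
    habitats=["upright: concave opening faces up"],
    affordances=["grasp(agent, this)", "fill(agent, this, liquid)"],
)

# An embodied agent can restrict its candidate actions to what the
# co-situated objects afford in their current habitat.
print([a for a in cup.affordances if a.startswith("grasp")])
```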