Listing of Künstliche Intelligenz 27(2) - May 2013 by subject "Common ground"
1 - 2 of 2
- Journal article: On Grounding Natural Kind Terms in Human-Robot Communication (KI - Künstliche Intelligenz: Vol. 27, No. 2, 2013). Peltason, Julia; Rieser, Hannes; Wachsmuth, Sven; Wrede, Britta.
  Our contribution situates Human-Robot Communication, especially the grounding of Natural Kind Terms, at the interface of Artificial Intelligence, Cognitive Psychology, Philosophy, Robotics and Semantics. We investigate whether a robot can be grounded in the sense favoured in Artificial Intelligence and Philosophy. We thus extend the notion of grounding to social symbol grounding, using an interactive perspective that addresses in detail how grounding can be achieved in interaction. For the acquisition of Natural Kind Terms we establish the notions of foundational common ground and foundational grounding, in contrast to the established notions of common ground and grounding. We introduce the robot setting used and provide an in-depth evaluation of a tutorial dialogue between a user and the robot. We investigate these Human-Robot Communication data from an ethnomethodological and an "omniscient" perspective (the latter amounting to consideration of automatic speech recognition results) and test whether these perspectives matter for analysing grounding. We show that the robot has acquired a partial concept of a Natural Kind Term (represented by statistics over visual object features) and that this is shared knowledge, hence the first step of a grounding sequence. Finally, we argue that grounding of robots can be achieved and extended to situated structures of considerable complexity. (A schematic sketch of the feature-statistics representation appears after this listing.)
- Journal article: Symbol Grounding as Social, Situated Construction of Meaning in Human-Robot Interaction (KI - Künstliche Intelligenz: Vol. 27, No. 2, 2013). Kruijff, Geert-Jan M.
  The paper views the issue of "symbol grounding" from the viewpoint of the construction of meaning between humans and robots in the context of a collaborative activity. This concerns a core aspect of the formation of common ground: the construction of meaning between actors as a conceptual representation which is believed to be mutually understood as referring to a particular aspect of reality. The problem in this construction is that experience is inherently subjective; more specifically, humans and robots experience and understand reality in fundamentally different ways. There is an inherent asymmetry between the actors involved. The paper focuses on how this asymmetry can be reflected logically, particularly in the underlying model theory. The point is to make it possible for a robot to reason explicitly about such asymmetry in understanding, to consider possibilities for alignment to deal with it, and to establish (from its viewpoint) a level of intersubjective or mutual understanding. Key to the approach taken in the paper is to treat conceptual representations as formulas over propositions which are based in proofs, as reasoned explanations of experience. This shifts the focus from a notion of "truth" to a notion of judgment: judgments can be subjectively right and still intersubjectively wrong (faultless disagreement), and they can evolve over time (updates, revision). The result is an approach which accommodates both asymmetric agency and social sentience, modelling symbol grounding in human-robot interaction as social, situated construction over time. (A schematic sketch of the judgment-based representation appears after this listing.)
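To make the first abstract's representation concrete: below is a minimal sketch, assuming that a "partial concept" of a Natural Kind Term is kept as running statistics (per-dimension mean and variance) over visual feature vectors supplied during a tutorial dialogue. All names here, and the choice of Welford's online update, are illustrative assumptions for exposition, not the authors' implementation.

```python
# Hypothetical sketch: a partial concept of a natural-kind term,
# represented as running statistics over visual object features.
from dataclasses import dataclass, field

@dataclass
class PartialConcept:
    """Per-dimension running mean/variance for one kind term."""
    label: str                                 # e.g. "apple" (the kind term)
    n: int = 0                                 # tutored exemplars seen so far
    mean: list = field(default_factory=list)   # running per-dimension mean
    m2: list = field(default_factory=list)     # running sum of squared deviations

    def observe(self, features: list) -> None:
        """Welford's online update with one tutored exemplar's feature vector."""
        if not self.mean:
            self.mean = [0.0] * len(features)
            self.m2 = [0.0] * len(features)
        self.n += 1
        for i, x in enumerate(features):
            delta = x - self.mean[i]
            self.mean[i] += delta / self.n
            self.m2[i] += delta * (x - self.mean[i])

    def variance(self) -> list:
        """Sample variance per dimension; zero until two exemplars are seen."""
        return [m / (self.n - 1) if self.n > 1 else 0.0 for m in self.m2]

# Usage: each tutoring turn ("this is an apple") contributes one exemplar.
concept = PartialConcept("apple")
for exemplar in ([0.9, 0.1, 0.7], [0.8, 0.2, 0.6]):
    concept.observe(exemplar)
print(concept.label, concept.mean, concept.variance())
```

In this reading, the statistics constitute only a partial concept: they summarize the exemplars shown so far, and become shared knowledge (the first step of a grounding sequence) once tutor and robot agree the label refers to them.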
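For the second abstract, here is a minimal sketch of the shift from "truth" to judgment, assuming a judgment pairs a proposition with the proof (reasoned explanation of experience) that backs it. This is an illustrative assumption, not Kruijff's formalism: it only shows how two agents can each be subjectively right yet intersubjectively in conflict (faultless disagreement), and how a judgment can be revised over time.

```python
# Hypothetical sketch: proof-backed judgments instead of bare truth values.
from dataclasses import dataclass

@dataclass(frozen=True)
class Judgment:
    agent: str          # who judges, e.g. "human" or "robot"
    proposition: str    # e.g. "object-3 is a mug"
    holds: bool         # the agent's verdict on the proposition
    proof: tuple        # experiences/inference steps backing the verdict

def faultless_disagreement(j1: Judgment, j2: Judgment) -> bool:
    """Same proposition, opposite verdicts, each backed by its own proof:
    subjectively right for each agent, intersubjectively in conflict."""
    return (j1.proposition == j2.proposition
            and j1.holds != j2.holds
            and bool(j1.proof) and bool(j2.proof))

def revise(old: Judgment, new_proof: tuple, holds: bool) -> Judgment:
    """Judgments evolve over time: new evidence replaces the proof and
    possibly flips the verdict (update/revision)."""
    return Judgment(old.agent, old.proposition, holds, new_proof)

# Example: the robot's vision suggests "mug" while the human tutor disagrees.
robot = Judgment("robot", "object-3 is a mug", True, ("visual match 0.81",))
human = Judgment("human", "object-3 is a mug", False, ("tutor assertion",))
assert faultless_disagreement(robot, human)

# Alignment: the robot takes the tutor's assertion as grounds to revise.
robot = revise(robot, ("visual match 0.81", "tutor assertion"), False)
assert not faultless_disagreement(robot, human)
```

The design point mirrors the abstract's asymmetry argument: because each judgment carries its own proof, the robot can reason explicitly about why the two verdicts differ and pursue alignment, rather than treating the proposition as simply true or false.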