
Towards a Theory of Explanations for Human–Robot Collaboration

dc.contributor.author: Sridharan, Mohan
dc.contributor.author: Meadows, Ben
dc.date.accessioned: 2021-04-23T09:28:38Z
dc.date.available: 2021-04-23T09:28:38Z
dc.date.issued: 2019
dc.description.abstract: This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture’s capabilities in the context of a simulated robot (a) moving target objects to desired locations or people; or (b) following recipes to bake biscuits.
dc.identifier.doi: 10.1007/s13218-019-00616-y
dc.identifier.pissn: 1610-1987
dc.identifier.uri: http://dx.doi.org/10.1007/s13218-019-00616-y
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/36256
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 33, No. 4
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Explanations
dc.subject: Human–robot collaboration
dc.subject: Non-monotonic logical reasoning
dc.subject: Probabilistic planning
dc.title: Towards a Theory of Explanations for Human–Robot Collaboration
dc.type: Text/Journal Article
gi.citation.endPage: 342
gi.citation.startPage: 331
