Title: Towards a Theory of Explanations for Human–Robot Collaboration
Authors: Sridharan, Mohan; Meadows, Ben
Year: 2019
Date available: 2021-04-23
Type: Text/Journal Article
DOI: 10.1007/s13218-019-00616-y (http://dx.doi.org/10.1007/s13218-019-00616-y)
Handle: https://dl.gi.de/handle/20.500.12116/36256
ISSN: 1610-1987
Keywords: Explanations; Human–robot collaboration; Non-monotonic logical reasoning; Probabilistic planning

Abstract: This paper makes two contributions towards enabling a robot to provide explanatory descriptions of its decisions, the underlying knowledge and beliefs, and the experiences that informed these beliefs. First, we present a theory of explanations comprising (i) claims about representing, reasoning with, and learning domain knowledge to support the construction of explanations; (ii) three fundamental axes to characterize explanations; and (iii) a methodology for constructing these explanations. Second, we describe an architecture for robots that implements this theory and supports scalability to complex domains and explanations. We demonstrate the architecture's capabilities in the context of a simulated robot (a) moving target objects to desired locations or to people, or (b) following recipes to bake biscuits.