Title: Perception of an Uncertain Ethical Reasoning Robot
Authors: Stellmach, Hanna; Lindner, Felix
Date issued: 2019-04-17 (2019)
Abstract: This study investigates the effect of uncertainty expressed by a robot facing a moral dilemma on humans’ moral judgment and impression formation. In two experiments, participants were shown a video of a robot explaining a moral dilemma and suggesting a decision to make. The robot either expressed certainty or uncertainty about the decision it suggests. Participants rated how much blame the robot deserves for its decision, the moral wrongness of the chosen action, and their impression of the robot in terms of four scale dimensions measuring social perception. The results suggest that the subpopulation of participants unfamiliar with the moral dilemma assigns significantly more blame to the uncertain robot as compared to the certain one, while expressed uncertainty has less effect on moral wrongness judgments. The second experiment suggests that higher blame ratings are mediated by the fact that the uncertain robot was perceived as more humanlike. We discuss implications of this result for the design of social robots.
Language: en
Keywords: Moral HRI; Conversational Robot; Uncertainty
Type: Text/Journal Article
DOI: 10.1515/icom-2019-0002
ISSN: 1618-162X
URI: https://dl.gi.de/handle/20.500.12116/21839