Authors: Khemlani, Sangeet; Johnson-Laird, P. N.
Date available: 2021-04-23
Date issued: 2019
Title: Why Machines Don’t (yet) Reason Like People
Type: Text/Journal Article
DOI: 10.1007/s13218-019-00599-w (http://dx.doi.org/10.1007/s13218-019-00599-w)
URI: https://dl.gi.de/handle/20.500.12116/36239
ISSN: 1610-1987
Keywords: Cognitive models; Mental models; Reasoning

Abstract: AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences; instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
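
The abstract's closing point, that such systems compute possibilities rather than proofs or truth values, can be illustrated with a minimal sketch. This is not the authors' own implementation; it merely assumes propositional atoms and premises expressed as Python predicates, and enumerates the possibilities (models) in which all premises hold, e.g. the three possibilities compatible with the inclusive disjunction "A or B".

    from itertools import product

    def possibilities(atoms, premises):
        """Return the possibilities (assignments) consistent with every premise.

        atoms    -- list of atomic proposition names, e.g. ["A", "B"]
        premises -- list of functions mapping an assignment dict to True/False
        """
        models = []
        for values in product([True, False], repeat=len(atoms)):
            assignment = dict(zip(atoms, values))
            if all(p(assignment) for p in premises):
                models.append(assignment)
        return models

    # Hypothetical example: "A or B" (inclusive) yields three possibilities,
    # not a proof and not a single truth value: A alone, B alone, and A with B.
    if __name__ == "__main__":
        disjunction = [lambda m: m["A"] or m["B"]]
        for model in possibilities(["A", "B"], disjunction):
            print({atom: value for atom, value in model.items() if value})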