
Why Machines Don’t (yet) Reason Like People

dc.contributor.author: Khemlani, Sangeet
dc.contributor.author: Johnson-Laird, P. N.
dc.date.accessioned: 2021-04-23T09:27:08Z
dc.date.available: 2021-04-23T09:27:08Z
dc.date.issued: 2019
dc.description.abstract: AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences—instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
dc.identifier.doi: 10.1007/s13218-019-00599-w
dc.identifier.pissn: 1610-1987
dc.identifier.uri: http://dx.doi.org/10.1007/s13218-019-00599-w
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/36239
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 33, No. 3
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Cognitive models
dc.subject: Mental models
dc.subject: Reasoning
dc.title: Why Machines Don’t (yet) Reason Like People
dc.type: Text/Journal Article
gi.citation.endPage: 228
gi.citation.startPage: 219