Listing of Künstliche Intelligenz 33(3) - September 2019, sorted by date of publication
- Journal article: Interview with Professor Hector Levesque, University of Toronto (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Furbach, Ulrich
- Journal article: News (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019)
- Journal article: Cognitive Argumentation for Human Syllogistic Reasoning (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Saldanha, Emmanuelle-Anna Dietz; Kakas, Antonis. This paper brings together work from the psychology of reasoning and computational argumentation in AI to propose a cognitive computational model for human reasoning, in particular human syllogistic reasoning. The model is grounded in the formal framework of argumentation in AI with its dialectic semantics for the quality of arguments. Arguments for logical conclusions are constructed via a set of proposed argument schemes, chosen for their cognitive validity as supported by studies in cognitive psychology. The proposed model, with its cognitive principles of argumentation, encompasses both formal and informal logical reasoning in a uniform way, and it captures well the empirical data of human syllogistic reasoning in the recent Syllogism Challenge 2017 on cognitive modeling. The paper also argues that the proposed approach could be applied more generally to other forms of high-level human reasoning.
- Journal article: Cognitive Reasoning: A Personal View (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Furbach, Ulrich; Hölldobler, Steffen; Ragni, Marco; Schon, Claudia; Stolzenburg, Frieder. The adjective cognitive, especially in conjunction with the word computing, seems to be a trendy buzzword in the artificial intelligence community and beyond nowadays. However, the term is often used without an explicit definition. We therefore start with a brief review of the notion and define what we mean by cognitive reasoning. It shall refer to modeling the human ability to draw meaningful conclusions despite incomplete and inconsistent knowledge, involving, among others, the representation of knowledge, where all processes from the acquisition and update of knowledge to the derivation of conclusions must be implementable and executable on appropriate hardware. We briefly introduce relevant approaches and methods from cognitive modeling, commonsense reasoning, and subsymbolic approaches. Furthermore, challenges and important research questions are stated, e.g., developing a computational model that can compete with a (human) reasoner on problems that require common sense.
- Journal article: Learning Inference Rules from Data (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Sakama, Chiaki; Inoue, Katsumi; Ribeiro, Tony. This paper considers the possibility of designing AI that can learn logical or non-logical inference rules from data. We first provide an abstract framework for learning logics. In this framework, an agent $\mathcal{A}$ provides training examples that consist of formulas $S$ and their logical consequences $T$. Then a machine $\mathcal{M}$ builds an axiomatic system that makes $T$ a consequence of $S$. Alternatively, in the absence of an agent $\mathcal{A}$, a machine $\mathcal{M}$ seeks an unknown logic underlying given data. We next consider the problem of learning logical inference rules by induction. Given a set $S$ of propositional formulas and their logical consequences $T$, the goal is to find deductive inference rules that produce $T$ from $S$. We show that an induction algorithm LF1T, which learns logic programs from interpretation transitions, successfully produces deductive inference rules from input data. Finally, we consider the problem of learning non-logical inference rules. We address three case studies for learning abductive inference, frame axioms, and conversational implicature. Each case study uses machine learning techniques together with metalogic programming.
- Journal article: Why Machines Don’t (yet) Reason Like People (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Khemlani, Sangeet; Johnson-Laird, P. N. AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences; instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings than those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute not proofs or truth values, but possibilities.
- Journal article: The CoRg Project: Cognitive Reasoning (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Schon, Claudia; Siebert, Sophie; Stolzenburg, Frieder. The term cognitive computing refers to new hardware and/or software that mimics the functioning of the human brain. In the context of question answering and commonsense reasoning, this means that the reasoning process of humans shall be modeled by adequate technical means. However, since humans do not follow the rules of classical logic, a system designed to model these abilities must be very versatile. The aim of the CoRg project (Cognitive Reasoning) is to successfully complete reasoning tasks that require commonsense reasoning. We address different benchmarks, with a focus on the COPA benchmark set (Choice of Plausible Alternatives). Since humans naturally use background knowledge, the CoRg system has to deal with large background knowledge bases and must be able to reason with multiple input formats and sources in order to draw explainable conclusions. For this, we have to find appropriate logics for cognitive reasoning. For a successful reasoning system, it nowadays seems important to combine automated reasoning with machine learning technology such as recurrent neural networks.
- Journal article: Semantics of Analogies from a Logical Perspective (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Abdelfattah, Ahmed M. H.; Krumnack, Ulf. A number of different approaches to model analogies and analogical reasoning in AI have been proposed, applying different knowledge representation and mapping strategies. Nevertheless, analogies still seem to be hard to grasp from a formal perspective: in particular, their formal semantics has no known treatment in the literature, although empirical treatments involving human subjects are abundant. In this paper, we present a framework that allows us to analyze the syntax and the semantics of analogies in a universal logic-based setting without committing ourselves to a specific type of logic. We show that the syntactic process of analogy-making by finding a generalization can be given a sensible interpretation on the semantic level, based on the theory of institutions. We then apply these ideas by considering a framework of analogy-making based on classical first-order logic.
- Journal article: Bridging the Prototype Gap: On the Evolution of Ugly Ducklings (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Turhan, Anni-Yasmin
- Journal article: Accentuating Features of Description Logics in High-Level Interpretations of Hand-Drawn Sketches (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Abdelghaffar, Nashwa M.; Abdelfattah, Ahmed M. H.; Taha, Azza A.; Khamis, Soheir M. We propose an ontology-based approach to interpreting hand-drawn sketches, originating from empirical results of experiments with human participants. The approach combines qualitative features of the sequence of sketch strokes with high-level knowledge, and accentuates the potential effectiveness of interpretation via description logics. The results of an implementation, along with explanations, are presented to show how to extract the semantics of hand-drawn sketches of four object categories.