Listing by keyword "Reasoning"
1 - 7 of 7
- Journal article: Cognitive Reasoning: A Personal View (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Furbach, Ulrich; Hölldobler, Steffen; Ragni, Marco; Schon, Claudia; Stolzenburg, Frieder.
  The adjective "cognitive", especially in conjunction with the word "computing", seems to be a trendy buzzword in the artificial intelligence community and beyond nowadays. However, the term is often used without an explicit definition. We therefore start with a brief review of the notion and define what we mean by cognitive reasoning: it refers to modeling the human ability to draw meaningful conclusions despite incomplete and inconsistent knowledge, involving, among other things, the representation of knowledge, where all processes from the acquisition and update of knowledge to the derivation of conclusions must be implementable and executable on appropriate hardware. We briefly introduce relevant approaches and methods from cognitive modeling, commonsense reasoning, and subsymbolic approaches. Furthermore, challenges and important research questions are stated, e.g., developing a computational model that can compete with a (human) reasoner on problems that require common sense.
- Journal article: Companion-Technology for Cognitive Technical Systems (KI - Künstliche Intelligenz: Vol. 30, No. 1, 2016). Biundo, Susanne; Wendemuth, Andreas.
  We introduce the Transregional Collaborative Research Centre "Companion-Technology for Cognitive Technical Systems", a cross-disciplinary endeavor towards the development of an enabling technology for Companion-systems. These systems completely adjust their functionality and service to the individual user. They comply with his or her capabilities, preferences, requirements, and current needs, and adapt to the individual's emotional state and ambient conditions. Companion-like behavior of technical systems is achieved through the investigation and implementation of cognitive abilities and their well-orchestrated interplay.
- Journal article: Error-Tolerance and Error Management in Lightweight Description Logics (KI - Künstliche Intelligenz: Vol. 34, No. 4, 2020). Peñaloza, Rafael.
  The construction and maintenance of ontologies is an error-prone task. As such, it is not uncommon to detect unwanted or erroneous consequences in large-scale ontologies which are already deployed in production. While waiting for a corrected version, these ontologies should still be available for use in a "safe" manner, which avoids the known errors. At the same time, the knowledge engineer in charge of producing the new version requires support to explore only the potentially problematic axioms and to reduce the number of exploration steps. In this paper, we explore the problem of deriving meaningful consequences from ontologies which contain known errors. Our work extends the ideas from inconsistency-tolerant reasoning to allow for arbitrary entailments as errors, and allows any part of the ontology (be it the terminological elements or the facts) to be the cause of the error. Our study shows that, with a few exceptions, tasks related to this kind of reasoning are intractable in general, even for very inexpressive description logics.
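  The core idea behind this kind of error-tolerant reasoning can be illustrated in miniature: compute the repairs of an ontology (maximal subsets that no longer entail a known unwanted consequence) and then reason cautiously (true in every repair) or bravely (true in some repair). The sketch below uses propositional Horn rules as a toy stand-in for a lightweight description logic; the axiom names and the brute-force subset enumeration are illustrative only, not the paper's formalism or algorithms.

  ```python
  from itertools import combinations

  # Toy "ontology" as Horn rules (body, head); facts have an empty body.
  # Illustrative example: the axiom Bird -> Flies produces the unwanted
  # consequence Flies for a penguin.
  ONTOLOGY = [
      ((), "Penguin"),                 # fact: the individual is a Penguin
      (("Penguin",), "Bird"),          # Penguin implies Bird
      (("Bird",), "Flies"),            # Bird implies Flies (the known error source)
      (("Penguin",), "Swims"),         # Penguin implies Swims
  ]

  def entails(axioms, atom):
      """Forward chaining: does this set of Horn rules derive `atom`?"""
      derived, changed = set(), True
      while changed:
          changed = False
          for body, head in axioms:
              if head not in derived and all(b in derived for b in body):
                  derived.add(head)
                  changed = True
      return atom in derived

  def repairs(axioms, error):
      """Maximal subsets of the ontology that do not entail the unwanted consequence."""
      found = []
      for size in range(len(axioms), -1, -1):     # largest subsets first
          for subset in combinations(axioms, size):
              if not entails(list(subset), error):
                  # keep only subsets not strictly contained in a larger repair
                  if not any(set(subset) < set(f) for f in found):
                      found.append(subset)
      return found

  reps = repairs(ONTOLOGY, "Flies")
  # Brave: Swims holds in some repair; cautious: Swims holds in every repair.
  # Cautious fails here because one repair removes the Penguin fact itself,
  # matching the paper's point that facts too can be causes of an error.
  brave = any(entails(list(r), "Swims") for r in reps)
  cautious = all(entails(list(r), "Swims") for r in reps)
  print(len(reps), brave, cautious)  # -> 3 True False
  ```

  Enumerating all subsets is exponential, which is consistent with the paper's finding that these reasoning tasks are intractable in general even for inexpressive logics.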
- Journal article: Higher-Level Cognition and Computation: A Survey (KI - Künstliche Intelligenz: Vol. 29, No. 3, 2015). Ragni, Marco; Stolzenburg, Frieder.
  Higher-level cognition is one of the constituents of our human mental abilities and subsumes reasoning, planning, language understanding and processing, and problem solving. A deeper understanding can lead to core insights into human cognition and help improve cognitive systems. There is, however, so far no unique characterization of the processes of human cognition. This survey introduces different approaches from cognitive architectures, artificial neural networks, and Bayesian modeling, and connects them, from a modeling perspective, to vibrant fields such as linking neurobiological processes with computational processes of reasoning, frameworks of rationality, and non-monotonic logics and common-sense reasoning. The survey ends with a set of five core challenges and open questions relevant for future research.
- Journal article: Semantic Technologies for Situation Awareness (KI - Künstliche Intelligenz: Vol. 34, No. 4, 2020). Baader, Franz; Borgwardt, Stefan; Koopmann, Patrick; Thost, Veronika; Turhan, Anni-Yasmin.
  The project "Semantic Technologies for Situation Awareness" was concerned with detecting certain critical situations from data obtained by observing a complex hardware and software system, in order to trigger actions that allow this system to save energy. The general idea was to formalize situations as ontology-mediated queries, but in order to express the relevant situations, both the employed ontology language and the query language had to be extended. In this paper we sketch the general approach and then concentrate on reporting the formal results obtained for reasoning in these extensions, without describing in detail the application that triggered them.
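  The general idea of an ontology-mediated query can be sketched in a few lines: observed data (facts about individuals) is completed under ontology rules before the query is evaluated, so that the query matches consequences the raw data only implies. All concept names below (HighLoad, Overheating, CriticalSituation) are invented for illustration; they are not the project's actual situations or query languages, which required dedicated extensions.

  ```python
  # Observed data: (concept, individual) pairs, a toy stand-in for an ABox.
  facts = {("HighLoad", "server1"), ("Overheating", "server1"),
           ("HighLoad", "server2")}

  # Toy ontology rule: any individual satisfying every body concept
  # also satisfies the head concept.
  rules = [(("HighLoad", "Overheating"), "CriticalSituation")]

  def certain_answers(facts, rules, concept):
      """Individuals that satisfy `concept` in the data completed under the rules."""
      derived, changed = set(facts), True
      while changed:
          changed = False
          for body, head in rules:
              # individuals satisfying every concept in the rule body
              matching = set.intersection(
                  *[{i for c, i in derived if c == b} for b in body])
              for ind in matching:
                  if (head, ind) not in derived:
                      derived.add((head, ind))
                      changed = True
      return sorted(i for c, i in derived if c == concept)

  # Only server1 is both highly loaded and overheating, so only it
  # triggers the critical situation.
  print(certain_answers(facts, rules, "CriticalSituation"))  # -> ['server1']
  ```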
- Journal article: The Tweety Library Collection for Logical Aspects of Artificial Intelligence and Knowledge Representation (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Thimm, Matthias.
  Tweety is a collection of Java libraries that provides a general interface layer for doing research in and working with different knowledge representation formalisms such as classical logics, conditional logics, probabilistic logics, and computational argumentation. It is designed in such a way that tasks like representing and reasoning with knowledge bases inside the programming environment are realizable in a common manner. Furthermore, Tweety contains libraries for dealing with agents, multi-agent systems, and dialog systems for agents, as well as belief revision, preference reasoning, preference aggregation, and action languages. A series of utility libraries that deal with, e.g., mathematical optimization complements the collection.
- Journal article: Why Machines Don't (yet) Reason Like People (KI - Künstliche Intelligenz: Vol. 33, No. 3, 2019). Khemlani, Sangeet; Johnson-Laird, P. N.
  AI has never come to grips with how human beings reason in daily life. Many automated theorem-proving technologies exist, but they cannot serve as a foundation for automated reasoning systems. In this paper, we trace their limitations back to two historical developments in AI: the motivation to establish automated theorem-provers for systems of mathematical logic, and the formulation of nonmonotonic systems of reasoning. We then describe why human reasoning cannot be simulated by current machine reasoning or deep learning methodologies. People can generate inferences on their own instead of just evaluating them. They use strategies and fallible shortcuts when they reason. The discovery of an inconsistency does not result in an explosion of inferences; instead, it often prompts reasoners to abandon a premise. And the connectives they use in natural language have different meanings from those in classical logic. Only recently have cognitive scientists begun to implement automated reasoning systems that reflect these human patterns of reasoning. A key constraint of these recent implementations is that they compute, not proofs or truth values, but possibilities.
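  The contrast between computing proofs and computing possibilities can be made concrete with a minimal sketch: represent each premise by the set of possibilities (truth assignments) consistent with it, and draw a conclusion by intersecting those sets rather than by applying inference rules. This is a drastic simplification in the spirit of mental-model accounts, not the authors' implementation.

  ```python
  from itertools import product

  def possibilities(premise):
      """Truth assignments over (A, B) consistent with a premise."""
      return {(a, b) for a, b in product([True, False], repeat=2)
              if premise(a, b)}

  # Premise 1: "A or B" leaves three possibilities open.
  or_models = possibilities(lambda a, b: a or b)
  # Premise 2: "not A" eliminates every possibility in which A holds.
  not_a = possibilities(lambda a, b: not a)

  # Inference by elimination: only possibilities consistent with
  # both premises remain, and B holds in all of them, so B follows.
  remaining = or_models & not_a
  print(remaining)                      # a single possibility survives
  print(all(b for a, b in remaining))   # -> True
  ```

  The same representation also exposes the human-like behavior the paper describes: when premises are jointly inconsistent, the intersection is empty, which signals that a premise should be reconsidered rather than licensing an explosion of inferences.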