Latest publications
- Journal article: Polynomial Algorithms for Computing a Single Preferred Assertional-Based Repair (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Telli, Abdelmoutia; Benferhat, Salem; Bourahla, Mustapha; Bouraoui, Zied; Tabia, Karim. This paper investigates different approaches for handling inconsistent DL-Lite knowledge bases in the case where the assertional base is prioritized and inconsistent with the terminological base. This inconsistency problem often arises when the assertions are provided by multiple conflicting sources with different reliability levels. We propose different inference strategies based on the selection of one consistent assertional base, called a preferred repair. For each strategy, a polynomial algorithm for computing the associated single preferred repair is proposed. Selecting a unique repair is important since it allows efficient handling of queries. We provide experimental studies showing (from a computational point of view) the benefits of selecting one repair when reasoning under inconsistency in lightweight knowledge bases.
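As a concrete illustration of the repair-selection idea in the abstract above, the following sketch computes one possible repair by adding assertion strata in decreasing order of reliability and skipping any stratum that would introduce inconsistency. The linear strategy, the toy consistency check (a TBox reduced to a single disjointness constraint), and all names are assumptions made for this example, not the algorithms proposed in the paper.

```python
import itertools

# Toy TBox: Professor and Student are disjoint (Professor ⊓ Student ⊑ ⊥).
DISJOINT = {frozenset({"Professor", "Student"})}

def consistent(abox):
    """An ABox of (individual, concept) assertions is consistent here iff
    no individual is asserted to belong to two disjoint concepts."""
    by_individual = {}
    for individual, concept in abox:
        by_individual.setdefault(individual, set()).add(concept)
    return all(
        not any(frozenset(pair) in DISJOINT
                for pair in itertools.combinations(concepts, 2))
        for concepts in by_individual.values()
    )

def linear_repair(strata):
    """Add strata from most to least reliable; skip any stratum whose
    addition would make the accumulated assertional base inconsistent."""
    repair = set()
    for stratum in strata:
        candidate = repair | stratum
        if consistent(candidate):
            repair = candidate
    return repair

# Two sources: a reliable one, and a less reliable one that conflicts on alice.
strata = [
    {("alice", "Professor")},                    # highest priority
    {("alice", "Student"), ("bob", "Student")},  # conflicts on alice
]
```

Note that the stratum-level strategy is coarse: the second stratum is dropped entirely, so bob's unproblematic assertion is lost along with the conflicting one, which is one motivation for considering several repair strategies.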
- Journal article: DFG Research Unit (Forschergruppe) FOR 1513 Hybrid Reasoning for Intelligent Systems (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Lakemeyer, Gerhard
- Journal article: Special Issue on Challenges for Reasoning under Uncertainty, Inconsistency, Vagueness, and Preferences (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Kern-Isberner, Gabriele; Lukasiewicz, Thomas
- Journal article: A Practical Comparison of Qualitative Inferences with Preferred Ranking Models (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Beierle, Christoph; Eichhorn, Christian; Kutsch, Steven. When reasoning qualitatively from a conditional knowledge base, two established approaches are system Z and p-entailment. The latter infers skeptically over all ranking models of the knowledge base, while system Z uses the unique Pareto-minimal ranking model for the inference relations. Between these two extremes of using all or just one ranking model, the approach of c-representations generates a subset of all ranking models satisfying certain constraints. Recent work shows that skeptical inference over all c-representations of a knowledge base includes and extends p-entailment. In this paper, we follow the idea of using preferred models of the knowledge base, instead of the set of all models, as the basis for the inference relation. We employ different minimality constraints for c-representations and demonstrate inference relations from sets of preferred c-representations with respect to these constraints. We present a practical tool for automatic c-inference that is based on a high-level, declarative constraint-logic programming approach. Using our implementation, we illustrate that different minimality constraints lead to inference relations that differ from each other as well as from system Z and p-entailment.
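To make the ranking-model construction behind system Z concrete, here is a small sketch of its tolerance partition for the classic penguin example: conditionals are repeatedly split off when some world verifies them without falsifying any remaining conditional. The encoding of conditionals (B|A) as pairs of predicates over worlds and the example knowledge base are assumptions for illustration; the preferred c-representations studied in the paper are obtained by a different, constraint-based construction.

```python
from itertools import product

ATOMS = ["bird", "penguin", "flies"]

def worlds():
    """Enumerate all truth assignments over the small signature."""
    for bits in product([False, True], repeat=len(ATOMS)):
        yield dict(zip(ATOMS, bits))

def tolerated(conditional, deltas):
    """(B|A) is tolerated by a set of conditionals if some world verifies it
    (satisfies A and B) while falsifying no conditional in the set."""
    A, B = conditional
    return any(
        A(w) and B(w) and all(not C(w) or D(w) for (C, D) in deltas)
        for w in worlds()
    )

def z_partition(kb):
    """Repeatedly split off the conditionals tolerated by the remainder;
    a conditional in layer i receives Z-rank i."""
    partition, rest = [], list(kb)
    while rest:
        layer = [c for c in rest if tolerated(c, rest)]
        if not layer:
            raise ValueError("knowledge base is not consistent")
        partition.append(layer)
        rest = [c for c in rest if c not in layer]
    return partition

# (flies|bird), (¬flies|penguin), (bird|penguin)
kb = [
    (lambda w: w["bird"], lambda w: w["flies"]),
    (lambda w: w["penguin"], lambda w: not w["flies"]),
    (lambda w: w["penguin"], lambda w: w["bird"]),
]
```

On this base the partition has two layers: the generic rule (flies|bird) is tolerated by the whole base, while the two penguin conditionals are more specific and end up in the higher layer.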
- Journal article: News (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017)
- Journal article: Quantitative Methods for Similarity in Description Logics (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Ecke, Andreas. Description logics (DLs) are a family of logic-based knowledge representation languages used to describe the knowledge of an application domain and reason about it in a formally well-defined way. However, all classical DLs have in common that they can only express exact knowledge and correspondingly only allow exact inferences. In practice, though, knowledge is rarely exact. Many definitions have exceptions or are vaguely formulated in the first place, and people might not only be interested in exact answers, but also in alternatives that are "close enough". We are interested in how to express that something is "close enough" and how to integrate this notion into the formalism of DLs. To this end, we employ the notion of similarity and dissimilarity measures and look at how useful measures can be defined in the context of DLs, with two particular applications: relaxed instance queries use a similarity measure in order to return not just the exact answers to some query, but all answers that are reasonably similar; prototypical definitions, on the other hand, use a measure of dissimilarity or distance between concepts in order to allow the definition of, and reasoning with, concepts that capture not just those individuals that satisfy exactly the stated properties, but also those that are "close enough".
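A toy sketch of a relaxed instance query driven by a similarity measure may help fix the idea from the abstract above. Individuals are flattened to sets of atomic features and compared with the Jaccard measure; this measure, the feature encoding, and the data are illustrative assumptions rather than the DL-aware measures developed in the thesis.

```python
def jaccard(features_a, features_b):
    """Similarity in [0, 1]: overlap of two feature sets."""
    if not features_a and not features_b:
        return 1.0
    return len(features_a & features_b) / len(features_a | features_b)

def relaxed_instances(query_features, individuals, threshold):
    """Return all individuals 'close enough' to the query,
    not only the exact matches."""
    return {
        name for name, feats in individuals.items()
        if jaccard(query_features, feats) >= threshold
    }

individuals = {
    "espresso":  {"coffee", "hot", "small"},
    "americano": {"coffee", "hot", "large"},
    "iced_tea":  {"tea", "cold", "large"},
}
query = {"coffee", "hot", "small"}  # "a small hot coffee"
```

With a threshold of 0.5, the exact match (espresso) is returned together with the reasonably similar americano, while iced_tea is filtered out; an exact instance query would return espresso alone.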
- Journal article: On the Compliance of Rationality Postulates for Inconsistency Measures: A More or Less Complete Picture (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Thimm, Matthias. An inconsistency measure is a function mapping a knowledge base to a non-negative real number, where larger values indicate the presence of more significant inconsistencies in the knowledge base. In order to assess the quality of a particular inconsistency measure, a wide range of rationality postulates has been proposed in the literature. In this paper, we survey 15 recent approaches to inconsistency measurement and provide a comparative analysis of their compliance with 18 rationality postulates. In doing so, we fill the gaps in previous partial investigations and provide new insights into the adequacy of certain measures and the significance of certain postulates.
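As one concrete instance of such a function, the following sketch implements the well-known MI measure, which counts the minimal inconsistent subsets of a propositional knowledge base. The encoding of formulas as Python predicates over truth assignments and the example base are assumptions made for illustration, not the survey's own formalization.

```python
from itertools import combinations, product

ATOMS = ["a", "b"]

def satisfiable(formulas):
    """Brute-force satisfiability check over the small signature."""
    return any(
        all(f(dict(zip(ATOMS, bits))) for f in formulas)
        for bits in product([False, True], repeat=len(ATOMS))
    )

def mi_measure(kb):
    """Count subsets that are inconsistent while every proper
    subset is consistent (the minimal inconsistent subsets)."""
    count = 0
    for size in range(1, len(kb) + 1):
        for subset in combinations(kb, size):
            if not satisfiable(subset) and all(
                satisfiable(smaller)
                for smaller in combinations(subset, size - 1)
            ):
                count += 1
    return count

# Knowledge base {a, ¬a, b, ¬b}: two minimal inconsistent subsets,
# namely {a, ¬a} and {b, ¬b}, so the measure returns 2.
kb = [
    lambda v: v["a"],
    lambda v: not v["a"],
    lambda v: v["b"],
    lambda v: not v["b"],
]
```

The measure behaves as the abstract's definition requires: a consistent base gets value 0, and adding further independent conflicts increases the value.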
- Journal article: Quo vadis, AI? (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Visser, Ubbo
- Journal article: Many Facets of Reasoning Under Uncertainty, Inconsistency, Vagueness, and Preferences: A Brief Survey (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Kern-Isberner, Gabriele; Lukasiewicz, Thomas. In this paper, we give an introduction to reasoning under uncertainty, inconsistency, vagueness, and preferences in artificial intelligence (AI), including some historical notes and a brief survey of previous approaches.
- Journal article: BDI Logics for BDI Architectures: Old Problems, New Perspectives (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017). Herzig, Andreas; Lorini, Emiliano; Perrussel, Laurent; Xiao, Zhanhao. The mental attitudes of belief, desire, and intention play a central role in the design and implementation of autonomous agents. In 1987, Bratman proposed their integration into a belief–desire–intention (BDI) theory that was seminal in AI. Since then, numerous approaches have been built on the BDI paradigm, both practical (BDI architectures and BDI agents) and formal (BDI logics). The most influential logical approaches are due to Cohen and Levesque and to Rao and Georgeff. However, three fundamental problems remain open. First, the practical and the formal approaches evolved separately, and neither fertilised the other. Second, only a few formal approaches addressed important issues such as the revision of intentions or the fundamentally paraconsistent nature of desires, and it seems fair to say that there is currently no consensual, comprehensive logical account of intention. Finally, only a few publications study the interaction between intention and other concepts that are naturally connected to it, such as actions, planning, and the revision of beliefs and intentions. Our paper summarizes the state of the art, discusses the main open problems, and sketches how they can be addressed. We argue in particular that research on intention should be better connected to fields such as reasoning about actions, automated planning, and belief revision and update.