Listing of Künstliche Intelligenz 29(3) - August 2015 by author "Kern-Isberner, Gabriele"
- Journal article: Extending and Completing Probabilistic Knowledge and Beliefs Without Bias (KI - Künstliche Intelligenz: Vol. 29, No. 3, 2015). Beierle, Christoph; Kern-Isberner, Gabriele; Finthammer, Marc; Potyka, Nico. Combining logic with probability theory provides a solid ground for representing and reasoning with uncertain knowledge. Given a set of probabilistic conditionals like "If A then B with probability x", a crucial question is how to extend this explicit knowledge while avoiding any unnecessary bias. The connection between such probabilistic reasoning and commonsense reasoning has been elaborated especially by Jeff Paris, advocating the principle of Maximum Entropy (MaxEnt). In this paper, we address the general concepts and ideas underlying and leading to MaxEnt, illustrate the use of MaxEnt by reporting on an example application from the medical domain, and give a brief survey of recent approaches to extending the MaxEnt principle to first-order logic.
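The bias-avoidance idea behind MaxEnt can be illustrated on the smallest possible case: a single conditional "If A then B with probability x" over two atoms. The sketch below is not taken from the paper; it is a minimal illustration using the standard closed-form Lagrange-multiplier solution for this one-constraint case, with hypothetical function and variable names.

```python
import math

def maxent_single_conditional(x):
    """MaxEnt distribution over the four worlds (A,B), (A,~B), (~A,B), (~A,~B)
    subject to the single constraint P(B|A) = x, with 0 < x < 1.

    The constraint P(AB) = x * P(A) can be written as a linear feature
    f with f(AB) = 1-x, f(A~B) = -x, f(~A,*) = 0; the MaxEnt solution is
    then p(w) proportional to exp(lam * f(w)), and for this single
    constraint the multiplier has the closed form lam = ln(x / (1-x)).
    """
    lam = math.log(x / (1.0 - x))
    weights = [
        math.exp(lam * (1.0 - x)),  # world (A, B):   feature value 1-x
        math.exp(-lam * x),         # world (A, ~B):  feature value -x
        1.0,                        # world (~A, B):  feature value 0
        1.0,                        # world (~A, ~B): feature value 0
    ]
    z = sum(weights)  # normalizing constant
    return [w / z for w in weights]
```

Note the "no unnecessary bias" property: the two worlds outside A get equal probability, i.e. the constraint about B-given-A says nothing about what holds when A is false, and MaxEnt leaves that part of the distribution uniform.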
- Journal article: Qualitative and Semi-Quantitative Inductive Reasoning with Conditionals (KI - Künstliche Intelligenz: Vol. 29, No. 3, 2015). Eichhorn, Christian; Kern-Isberner, Gabriele. Conditionals like "birds fly—if bird then fly" are crucial for commonsense reasoning. In this technical project report we show that conditional logics provide a powerful formal framework that helps in understanding if-then sentences in a way that is much closer to human reasoning than classical logic and that allows for high-quality reasoning methods. We describe methods that inductively generate models from conditional knowledge bases. For this, we use both qualitative (like preferential models) and semi-quantitative (like Spohn's ranking functions) semantics. We show similarities and differences between the resulting inference relations with respect to formal properties. We further report on two graphical methods on top of the ranking approaches which make it possible to decompose the models into smaller, more feasible components and allow for local inferences.
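A ranking-function (OCF) semantics in the spirit of Spohn can be sketched in a few lines: worlds are assigned degrees of disbelief, and a conditional (B|A) is accepted iff the most plausible world verifying it out-ranks the most plausible world falsifying it. The sketch below is not the authors' construction; it uses a hand-crafted, System-Z-style ranking for the classic birds/penguins base as an assumed example, and all names are hypothetical.

```python
import math
from itertools import product

ATOMS = ["b", "p", "f"]  # b = bird, p = penguin, f = flies
WORLDS = [dict(zip(ATOMS, vals)) for vals in product([True, False], repeat=3)]

def kappa(w):
    """Hand-crafted ranking (degree of disbelief, 0 = fully plausible) for the
    conditional base {(f|b), (b|p), (~f|p)}: a world's rank is one more than
    the priority of the most exceptional rule it violates, 0 if it violates none.
    The two penguin rules get higher priority since penguins are exceptional birds."""
    k = 0
    if w["b"] and not w["f"]:
        k = max(k, 1)  # violates the low-priority rule "birds fly"
    if w["p"] and not w["b"]:
        k = max(k, 2)  # violates the high-priority rule "penguins are birds"
    if w["p"] and w["f"]:
        k = max(k, 2)  # violates the high-priority rule "penguins don't fly"
    return k

def accepts(antecedent, consequent):
    """The ranking accepts (consequent | antecedent) iff the best world
    verifying the conditional is strictly more plausible (lower rank)
    than the best world falsifying it."""
    verifying = [kappa(w) for w in WORLDS if antecedent(w) and consequent(w)]
    falsifying = [kappa(w) for w in WORLDS if antecedent(w) and not consequent(w)]
    return min(verifying, default=math.inf) < min(falsifying, default=math.inf)
```

For example, `accepts(lambda w: w["b"], lambda w: w["f"])` holds (birds normally fly), while for penguins the more specific rule wins: `accepts(lambda w: w["p"], lambda w: not w["f"])` holds and `accepts(lambda w: w["p"], lambda w: w["f"])` does not. This defeasible overriding of the general rule by the exception is exactly what classical logic cannot express.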