Listing by Author "Thimm, Matthias"
- Conference Paper: Ein erster Prototyp: Sicherheitsguide für Grundschulkinder beim Umgang mit dem Internet (Informatik 2014, 2014) Fruth, Jana; Thimm, Matthias; Kuhlmann, Sven; Dittmann, Jana. Even primary school children already use the internet regularly and are thereby frequently exposed to security threats they cannot handle on their own. This contribution presents the concept and a prototype of a software-based security guide intended to sensitize primary school children aged 6 to 10 to potential security threats on the internet and to teach them practical competencies in using security mechanisms. The prototype was evaluated in a user study at a primary school using a dedicated methodology; however, the presumed learning effects still have to be substantiated by further tests.
- Journal Article: On the Compliance of Rationality Postulates for Inconsistency Measures: A More or Less Complete Picture (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017) Thimm, Matthias. An inconsistency measure is a function mapping a knowledge base to a non-negative real number, where larger values indicate the presence of more significant inconsistencies in the knowledge base. In order to assess the quality of a particular inconsistency measure, a wide range of rationality postulates has been proposed in the literature. In this paper, we survey 15 recent approaches to inconsistency measurement and provide a comparative analysis of their compliance with 18 rationality postulates. In doing so, we fill the gaps left by previous partial investigations and provide new insights into the adequacy of certain measures and the significance of certain postulates. (A minimal formal sketch of this setting is given after the listing.)
- Journal Article: Strategic Argumentation in Multi-Agent Systems (KI - Künstliche Intelligenz: Vol. 28, No. 3, 2014) Thimm, Matthias. Argumentation-based negotiation describes the process of decision-making in multi-agent systems through the exchange of arguments. If agents have only partial knowledge about the subject of a dialogue, strategic argumentation can be used to exploit weaknesses in the argumentation of other agents, and thus to persuade them of a specific opinion and reach a certain outcome. This paper gives an overview of the field of strategic argumentation and surveys recent works and developments. We provide a general discussion of the problem of strategic argumentation in multi-agent settings and discuss approaches to strategic argumentation, in particular strategies based on opponent models. (A schematic opponent-model sketch follows after the listing.)
- Journal Article: The Tweety Library Collection for Logical Aspects of Artificial Intelligence and Knowledge Representation (KI - Künstliche Intelligenz: Vol. 31, No. 1, 2017) Thimm, Matthias. Tweety is a collection of Java libraries that provides a general interface layer for doing research in and working with different knowledge representation formalisms such as classical logics, conditional logics, probabilistic logics, and computational argumentation. It is designed in such a way that tasks like representing and reasoning with knowledge bases inside the programming environment can be carried out in a common manner. Furthermore, Tweety contains libraries for dealing with agents, multi-agent systems, and dialog systems for agents, as well as belief revision, preference reasoning, preference aggregation, and action languages. A series of utility libraries that deal with, e.g., mathematical optimization complements the collection. (A hedged usage sketch follows after the listing.)
- Journal Article: Towards Understanding and Arguing with Classifiers: Recent Progress (Datenbank-Spektrum: Vol. 20, No. 2, 2020) Shao, Xiaoting; Rienstra, Tjitze; Thimm, Matthias; Kersting, Kristian. Machine learning and argumentation can potentially benefit greatly from each other. Combining deep classifiers with knowledge expressed in the form of rules and constraints allows one to leverage different forms of abstractions within argumentation mining. Argumentation for machine learning can yield argumentation-based learning methods, where the machine and the user argue about the learned model with the common goal of providing results of maximum utility to the user. Unfortunately, both directions are currently rather challenging. For instance, combining deep neural models with logic typically yields only deterministic results, while combining probabilistic models with logic often results in intractable inference. Therefore, we review a novel deep but tractable model for conditional probability distributions that can harness the expressive power of universal function approximators such as neural networks while still maintaining a wide range of tractable inference routines. While this new model has shown appealing performance on classification tasks, humans cannot easily understand the reasons for its decisions. Therefore, we also review our recent efforts on how to "argue" with deep models. On synthetic and real data we illustrate how "arguing" with a deep model about its explanations can actually help to revise the model if it is right for the wrong reasons. (A generic tractability sketch follows after the listing.)
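
Regarding the inconsistency-measurement survey listed above: a minimal sketch of the underlying setting, using standard notions from the inconsistency-measurement literature rather than the specific 15 measures and 18 postulates compared in the article. An inconsistency measure assigns a non-negative value to a knowledge base, and rationality postulates constrain how such a measure may behave, for example:

```latex
% An inconsistency measure maps a knowledge base K (a set of formulas over a
% language L) to a non-negative real number:
%   \mathcal{I} : 2^{\mathcal{L}} \to \mathbb{R}_{\ge 0}
% Two widely cited postulates:
\[
  \text{Consistency:}\quad \mathcal{I}(\mathcal{K}) = 0
  \;\Longleftrightarrow\; \mathcal{K} \text{ is consistent}
\]
\[
  \text{Monotony:}\quad \mathcal{K} \subseteq \mathcal{K}'
  \;\Longrightarrow\; \mathcal{I}(\mathcal{K}) \le \mathcal{I}(\mathcal{K}')
\]
% A simple concrete instance is the measure that counts minimal inconsistent
% subsets, \mathcal{I}_{\mathrm{MI}}(\mathcal{K}) = |\mathrm{MI}(\mathcal{K})|,
% which satisfies both postulates on finite knowledge bases.
```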
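
For the strategic-argumentation survey listed above, a deliberately simplified sketch of an opponent-model-based strategy: the agent asserts an argument only if, according to its model of the opponent's knowledge, no counterargument is available. All class and variable names here are illustrative assumptions, not taken from the article.

```java
import java.util.List;
import java.util.Optional;
import java.util.Set;

public class StrategicAgentSketch {

    /** Hypothetical attack relation entry: attacker argues against attacked. */
    record Attack(String attacker, String attacked) {}

    /**
     * Pick an argument to assert that, according to the opponent model,
     * the opponent cannot counter; empty if no such "safe" move exists.
     */
    static Optional<String> chooseMove(Set<String> ownArguments,
                                       Set<String> believedOpponentArguments,
                                       List<Attack> attacks) {
        for (String candidate : ownArguments) {
            boolean counterable = attacks.stream().anyMatch(att ->
                    att.attacked().equals(candidate)
                            && believedOpponentArguments.contains(att.attacker()));
            if (!counterable) {
                return Optional.of(candidate);
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        Set<String> own = Set.of("a", "b");                    // our available arguments
        Set<String> opponentModel = Set.of("c");               // what we believe the opponent knows
        List<Attack> attacks = List.of(new Attack("c", "a"));  // c is a counterargument to a
        // "a" can be countered under this opponent model, "b" cannot -> prints Optional[b]
        System.out.println(chooseMove(own, opponentModel, attacks));
    }
}
```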
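
For the Tweety entry listed above, a minimal usage sketch: building an abstract (Dung-style) argumentation framework and querying its grounded extension. The package and class names are assumed to match recent TweetyProject releases (org.tweetyproject.*); older releases used the net.sf.tweety namespace, so treat the exact identifiers as assumptions rather than a verified API reference.

```java
// Assumed imports for the TweetyProject arg.dung module.
import org.tweetyproject.arg.dung.reasoner.SimpleGroundedReasoner;
import org.tweetyproject.arg.dung.syntax.Argument;
import org.tweetyproject.arg.dung.syntax.Attack;
import org.tweetyproject.arg.dung.syntax.DungTheory;

public class TweetyDungExample {
    public static void main(String[] args) {
        // Build a small abstract argumentation framework: c attacks b, b attacks a.
        DungTheory af = new DungTheory();
        Argument a = new Argument("a");
        Argument b = new Argument("b");
        Argument c = new Argument("c");
        af.add(a);
        af.add(b);
        af.add(c);
        af.add(new Attack(b, a));
        af.add(new Attack(c, b));
        // The grounded extension should contain c and a (b is defeated by c).
        System.out.println(new SimpleGroundedReasoner().getModel(af));
    }
}
```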
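
For the last entry, a generic illustration of why sum-product structured distributions admit tractable marginal inference; this is the textbook mixture-of-factorizations identity, not the specific conditional model reviewed in the article.

```latex
% A mixture of fully factorized components (the simplest sum-product form):
\[
  p(x_1,\dots,x_n) \;=\; \sum_{c} w_c \prod_{i=1}^{n} p_{c,i}(x_i),
  \qquad w_c \ge 0,\; \sum_c w_c = 1
\]
% Marginalization distributes through this structure, so summing out a
% variable costs no more than one pass over the model:
\[
  \sum_{x_n} p(x_1,\dots,x_n)
  \;=\; \sum_{c} w_c \Big(\prod_{i=1}^{n-1} p_{c,i}(x_i)\Big)
        \underbrace{\sum_{x_n} p_{c,n}(x_n)}_{=\,1}
\]
```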