Mutual Explanations for Cooperative Decision Making in Medicine

dc.contributor.author: Schmid, Ute
dc.contributor.author: Finzel, Bettina
dc.date.accessioned: 2021-04-23T09:34:06Z
dc.date.available: 2021-04-23T09:34:06Z
dc.date.issued: 2020
dc.description.abstract: Exploiting mutual explanations for interactive learning is presented as part of an interdisciplinary research project on transparent machine learning for medical decision support. The focus of the project is to combine deep learning black-box approaches with interpretable machine learning for the classification of different types of medical images, uniting the predictive accuracy of deep learning with the transparency and comprehensibility of interpretable models. Specifically, we present an extension of the Inductive Logic Programming system Aleph that allows for interactive learning. Medical experts can ask for verbal explanations; they can correct classification decisions and, in addition, correct the explanations. Thereby, expert knowledge can be taken into account in the form of constraints for model adaptation.
dc.identifier.doi: 10.1007/s13218-020-00633-2
dc.identifier.pissn: 1610-1987
dc.identifier.uri: http://dx.doi.org/10.1007/s13218-020-00633-2
dc.identifier.uri: https://dl.gi.de/handle/20.500.12116/36284
dc.publisher: Springer
dc.relation.ispartof: KI - Künstliche Intelligenz: Vol. 34, No. 2
dc.relation.ispartofseries: KI - Künstliche Intelligenz
dc.subject: Explanations as constraints
dc.subject: Human-AI partnership
dc.subject: Inductive Logic Programming
dc.title: Mutual Explanations for Cooperative Decision Making in Medicine
dc.type: Text/Journal Article
gi.citation.endPage: 233
gi.citation.startPage: 227