Show simple item record

dc.contributor.author: Holzinger, Andreas
dc.date.accessioned: 2021-01-17T20:04:29Z
dc.date.available: 2021-01-17T20:04:29Z
dc.date.issued: 2021
dc.identifier.issn: 2196-6826
dc.identifier.uri: http://dl.gi.de/handle/20.500.12116/34678
dc.description.abstract: Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex "black boxes", which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g. to highlight which input parameters are relevant for a result; however, in the medical domain there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability with causability and to allow a domain expert to ask questions to understand why an AI came up with a result, and also to ask "what-if" questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
dc.language.iso: en
dc.publisher: De Gruyter
dc.relation.ispartof: i-com: Vol. 19, No. 3
dc.subject: explainable AI
dc.subject: Human-Centered AI
dc.subject: Human-AI interfaces
dc.title: Explainable AI and Multi-Modal Causability in Medicine
dc.type: Text/Journal Article
dc.pubPlace: Berlin
mci.reference.pages: 171-179
dc.identifier.doi: 10.1515/icom-2020-0024


Files in this item


There are no files associated with this item.
