Browsing by Author "Holzinger, Andreas"
- Conference paper: Biometrical Signatures in Practice: A challenge for improving Human-Computer Interaction in Clinical Workflows (Mensch und Computer 2006: Mensch und Computer im Strukturwandel, 2006). Holzinger, Andreas; Geierhofer, Regina; Searle, Gig.
  Experience shows that within the sensitive area of documentation in a clinical centre, there are problems in gaining end users' acceptance of authentication solutions based on smart cards, passwords, fingerprints or digital signatures. The individual signature is still the most accepted method of certifying medical documents in hospitals and other health care sectors. This paper presents some solutions to the problem of using the biometric signature to certify Diagnostic Finding Reports (DFRs) within a traditional clinical workflow. For this purpose, the authors implemented a framework application in C# and .NET on a Tablet PC. A usability study was carried out to gain insight into the acceptance and usability of the biometric signature. Good end-user acceptance and usability can only be obtained by providing simple operation (good user guidance), very short response times and, where possible, low rejection rates. For an application to be successful, a clear added value must be immediately apparent to the end user.
- Conference paper: Design and development of a LO-editor for the virtual medical campus Graz (DeLFI 2003, Tagungsband der 1. e-Learning Fachtagung Informatik, 16.-18. September 2003 in Garching bei München, 2003). Holzinger, Andreas; Wascher, Ingemar; Steinmann, Christian.
  At the medical faculty of Graz University a new curriculum has been developed for the study of human medicine. This entirely new approach is based on a module/track model and follows the basic principles of interdisciplinary, topic-centred and patient-oriented instruction. The general objective of the Virtual Medical Campus is to realize an information system that makes the curriculum digitally accessible. The learning objects (LO) used within this campus are developed on the basis of Learning Object Metadata (LOM) for "trans-national education" as a basis for international networking. Easy manipulation of the metadata requires a LOM editor. Although some LOM editors are available, we decided to develop our own editor for two reasons. First, we define an LO as complete only if pre-knowledge questions and self-evaluation questions are included; our editor therefore handles not just the metadata but the whole LO. Second, because our users are medical teachers and students with little computing experience, we committed ourselves to the method of User Centered Design (UCD). We therefore developed a completely new editor and speak of an LO-Editor rather than a LOM-Editor. This paper describes the development of our LO-Editor from scratch under external constraints and time restrictions. The LO-Editor is based on the (then new) Microsoft .NET technology, but it is not implemented as a web service. It is a server-based application split into two parts, the front end and the middleware. To achieve the greatest possible platform independence, the front end consists of simple Active Server Pages (ASP) using dynamic HTML (DHTML) and handles the user interaction. The metadata standard is implemented in the middleware by mapping the standard onto a class hierarchy. Using the built-in deserialization and serialization functions of .NET, a metadata description (XML file) can be represented as an object tree, and the object tree can in turn be saved as a metadata description file. A database is therefore not needed to handle the metadata standard, which makes the application faster.
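  The XML-to-object-tree round trip this abstract describes can be sketched in a few lines. The original middleware used .NET's built-in (de)serialization; the following Python sketch is only an analogy, and all class, element and field names are invented:

    # Minimal sketch of the XML <-> object-tree round trip described
    # above. The original LO-Editor used .NET's built-in serialization;
    # the element and class names below are hypothetical illustrations.
    import xml.etree.ElementTree as ET
    from dataclasses import dataclass, field

    @dataclass
    class LearningObject:
        title: str = ""
        keywords: list = field(default_factory=list)

    def deserialize(xml_text: str) -> LearningObject:
        """Parse a metadata file into an object tree (no database needed)."""
        root = ET.fromstring(xml_text)
        return LearningObject(
            title=root.findtext("title", default=""),
            keywords=[k.text for k in root.findall("keyword")],
        )

    def serialize(lo: LearningObject) -> str:
        """Write the object tree back out as a metadata description file."""
        root = ET.Element("learningObject")
        ET.SubElement(root, "title").text = lo.title
        for kw in lo.keywords:
            ET.SubElement(root, "keyword").text = kw
        return ET.tostring(root, encoding="unicode")

    lo = deserialize("<learningObject><title>Anatomy 101</title>"
                     "<keyword>anatomy</keyword></learningObject>")
    lo.keywords.append("medicine")   # edit the object tree in memory
    print(serialize(lo))             # ...and save it back as XML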
- Journal article: Explainable AI (ex-AI) (Informatik Spektrum: Vol. 41, No. 2, 2018). Holzinger, Andreas.
  "Explainable AI" is not a new field. Rather, the problem of explainability is as old as AI itself, indeed a consequence of it. While the rule-based solutions of early AI were comprehensible "glass-box" approaches, their weakness lay in dealing with the uncertainties of the real world. With the introduction of probabilistic modelling and statistical learning methods, applications became increasingly successful, but also increasingly complex and opaque. For example, words of natural language are mapped onto high-dimensional vectors and thereby become incomprehensible to humans. In the future, context-adaptive methods will be needed that link statistical learning methods with large knowledge representations (ontologies) and allow traceability, comprehensibility and explainability: the goal of "explainable AI".
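  The opacity mentioned in this abstract can be illustrated in miniature: once a word is mapped to a high-dimensional vector, its individual components carry no human-readable meaning. A toy sketch (random stand-in vectors, not a trained embedding):

    # Toy illustration of the opacity problem: a word becomes a
    # high-dimensional vector whose components mean nothing to a
    # human reader. Vectors here are random stand-ins, not a
    # trained embedding model.
    import numpy as np

    rng = np.random.default_rng(0)
    embedding = {w: rng.standard_normal(300) for w in ["patient", "diagnosis"]}

    v = embedding["patient"]
    print(v[:5])   # e.g. [ 0.12 -0.13  0.64 ...] -- opaque to a human
    # Similarity between words is just geometry on these opaque vectors:
    w = embedding["diagnosis"]
    sim = v @ w / (np.linalg.norm(v) * np.linalg.norm(w))
    print(round(float(sim), 3))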
- Journal article: Explainable AI and Multi-Modal Causability in Medicine (i-com: Vol. 19, No. 3, 2021). Holzinger, Andreas.
  Progress in statistical machine learning has made AI in medicine successful, in certain classification tasks even beyond human-level performance. Nevertheless, correlation is not causation, and successful models are often complex "black boxes", which makes it hard to understand why a result has been achieved. The explainable AI (xAI) community develops methods, e.g., to highlight which input parameters are relevant for a result; in the medical domain, however, there is a need for causability: in the same way that usability encompasses measurements for the quality of use, causability encompasses measurements for the quality of explanations produced by xAI. The key for future human-AI interfaces is to map explainability onto causability and to allow a domain expert to ask questions to understand why an AI came up with a result, and also to ask "what-if" questions (counterfactuals) to gain insight into the underlying independent explanatory factors of a result. Multi-modal causability is important in the medical domain because different modalities often contribute to a result.
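  A "what-if" question of the kind this abstract describes can be sketched as changing one input feature and querying the model again. A minimal illustration with an off-the-shelf classifier; the data and the feature interpretation are invented:

    # Minimal sketch of a "what-if" (counterfactual) query: change one
    # input feature and ask the model again. Toy data and feature
    # meanings are invented for illustration only.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)
    X = rng.normal(size=(200, 2))                  # e.g. [biomarker, age]
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic ground truth
    model = LogisticRegression().fit(X, y)

    x = np.array([[0.9, -0.2]])                    # one patient
    print("prediction:", model.predict_proba(x)[0, 1])

    x_cf = x.copy()
    x_cf[0, 0] = -0.9                              # what if the biomarker were low?
    print("counterfactual:", model.predict_proba(x_cf)[0, 1])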
- Journal article: Generating Explanations for Conceptual Validation of Graph Neural Networks: An Investigation of Symbolic Predicates Learned on Relevance-Ranked Sub-Graphs (KI - Künstliche Intelligenz: Vol. 36, No. 0, 2022). Finzel, Bettina; Saranti, Anna; Angerschmid, Alessa; Tafler, David; Pfeifer, Bastian; Holzinger, Andreas.
  Graph Neural Networks (GNN) show good performance in relational data classification. However, their contribution to concept learning, and the validation of their output from an application domain's and a user's perspective, have not been thoroughly studied. We argue that combining symbolic learning methods, such as Inductive Logic Programming (ILP), with statistical machine learning methods, especially GNNs, is an essential forward-looking step towards powerful and validatable relational concept learning. In this contribution, we introduce a benchmark for the conceptual validation of GNN classification outputs. It consists of symbolic representations of symmetric and non-symmetric figures taken from a well-known Kandinsky Pattern data set. We further provide a novel validation framework that can be used to generate comprehensible explanations with ILP on top of the relevance output of GNN explainers and the human-expected relevance for concepts learned by GNNs. Our experiments on this benchmark demonstrate that it is possible to extract symbolic concepts from the most relevant explanations that are representative of what a GNN has learned. Our findings open up a variety of avenues for future research on validatable explanations for GNNs.
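  The pipeline the abstract outlines, thresholding an explainer's relevance scores to a sub-graph and rewriting it as symbolic facts for an ILP learner, can be sketched as follows; the relevance values and predicate names are invented, not taken from the paper:

    # Hedged sketch of the validation pipeline: threshold an explainer's
    # edge-relevance scores to a sub-graph, then emit Prolog-style facts
    # an ILP system could consume. Scores and predicates are invented.
    relevance = {            # edge -> relevance from some GNN explainer
        ("a", "b"): 0.91,
        ("b", "c"): 0.88,
        ("c", "d"): 0.07,
    }
    node_color = {"a": "red", "b": "red", "c": "blue", "d": "blue"}

    THRESHOLD = 0.5
    subgraph = [e for e, r in relevance.items() if r >= THRESHOLD]

    facts = [f"edge({u}, {v})." for u, v in subgraph]
    facts += [f"color({n}, {c})." for n, c in node_color.items()
              if any(n in e for e in subgraph)]
    print("\n".join(facts))   # background knowledge for an ILP learner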
- Text document: HCI-Ausbildung in Österreich – ein Überblick (Tagungsband UP05, 2005). Thurnher, Bettina; Weidmann, Karl-Heinz; Kempter, Guido; Holzinger, Andreas; Andrews, Keith; Tscheligi, Manfred; Erharter, Dorothea; Stary, Christian; Purgathofer, Peter; Baumann, Konrad; Nischelwitzer, Alexander K.
- Journal article: Interactive Machine Learning (iML) (Informatik-Spektrum: Vol. 39, No. 1, 2016). Holzinger, Andreas.
  While machine learning (ML) works very well in many domains, as the performance of self-driving cars shows, fully automated ML methods in complex domains carry the risk of modelling artefacts. One example of a complex domain is biomedicine, where we are confronted with high-dimensional, probabilistic and incomplete data sets. In such problem settings it can be advantageous not to forgo human domain knowledge, but rather to combine human intelligence with ML.
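  One common form of such a human-in-the-loop combination is an interactive labelling loop in which the model asks a human expert for the label it is least certain about. A minimal sketch with synthetic data standing in for the expert; this illustrates the general idea, not the specific method of the article:

    # Sketch of one interactive-ML loop: the model queries a "human
    # expert" (here: synthetic ground truth) for the label it is least
    # certain about, then retrains on the enlarged labeled set.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    X = rng.normal(size=(300, 2))
    y_true = (X[:, 0] > 0).astype(int)   # hidden ground truth ("the expert")

    # Seed set with both classes present; the rest is the unlabeled pool.
    labeled = list(np.where(y_true == 1)[0][:5]) + list(np.where(y_true == 0)[0][:5])
    pool = [i for i in range(300) if i not in labeled]

    model = LogisticRegression()
    for _ in range(5):
        model.fit(X[labeled], y_true[labeled])
        proba = model.predict_proba(X[pool])[:, 1]
        i = pool[int(np.argmin(np.abs(proba - 0.5)))]  # most uncertain case
        labeled.append(i)   # the human expert supplies this label ...
        pool.remove(i)      # ... and the case leaves the pool

    print("accuracy:", model.score(X, y_true))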
- Journal article: Measuring the Quality of Explanations: The System Causability Scale (SCS) (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020). Holzinger, Andreas; Carrington, André; Müller, Heimo.
  Recent successes in Artificial Intelligence (AI) and Machine Learning (ML) allow problems to be solved automatically, without any human intervention. Autonomous approaches can be very convenient. However, in certain domains, e.g., the medical domain, it is necessary to enable a domain expert to understand why an algorithm came up with a certain result. Consequently, the field of Explainable AI (xAI) has rapidly gained interest worldwide in various domains, particularly in medicine. Explainable AI studies the transparency and traceability of opaque AI/ML, and a huge variety of methods already exists. For example, layer-wise relevance propagation can highlight the parts of the input to, and the representations within, a neural network that caused a result. This is an important first step towards ensuring that end users, e.g., medical professionals, assume responsibility for decision making with AI/ML, and it is of interest to professionals and regulators. Interactive ML adds the component of human expertise to AI/ML processes by enabling experts to re-enact and retrace AI/ML results, e.g., to check them for plausibility. This requires new human-AI interfaces for explainable AI. In order to build effective and efficient interactive human-AI interfaces, we have to deal with the question of how to evaluate the quality of explanations given by an explainable AI system. In this paper we introduce our System Causability Scale (SCS) to measure the quality of explanations. It is based on our notion of causability (Holzinger et al. in Wiley Interdiscip Rev Data Min Knowl Discov 9(4), 2019) combined with concepts adapted from a widely accepted usability scale.
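  The scoring of such a questionnaire can be sketched compactly, assuming the published 10-item, 5-point Likert format with the total taken as the normalized sum of the ratings; consult the paper for the exact items and scoring rule:

    # Minimal sketch of scoring a 10-item, 5-point Likert questionnaire
    # such as the SCS. Assumes the total is the normalized sum of the
    # ratings; see the cited paper for the authoritative scoring.
    def scs_score(ratings: list[int]) -> float:
        assert len(ratings) == 10 and all(1 <= r <= 5 for r in ratings)
        return sum(ratings) / (5 * len(ratings))   # ranges 0.2 .. 1.0

    print(scs_score([4, 5, 4, 3, 5, 4, 4, 5, 3, 4]))  # -> 0.82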
- Conference paper: Mobile Anwendungen für Medizin und Gesundheit (Mensch & Computer 2012 – Workshopband: interaktiv informiert – allgegenwärtig und allumfassend!?, 2012). Novak, Jasminko; Ziegler, Jürgen; Hoppe, Ulrich; Holzinger, Andreas; Heintze, Christoph; Böckle, Martin.
  The goal of the workshop is to present and discuss innovative applications of mobile technologies in medicine. This covers both the "classic" areas of optimizing hospital workflows or mobile support for the electronic patient record, and the new fields of application opened up by the latest generation of mobile devices (tablets, smartphones, smart pens).
- Journal article: Semantische Informationsextraktion in medizinischen Informationssystemen (Informatik-Spektrum: Vol. 30, No. 2, 2007). Holzinger, Andreas; Geierhofer, Regina; Errath, Maximilian.
  This article describes some experiences and typical problems with text mining in medicine and gives an insight into current and future challenges in research and development. Interestingly, even in the "multimedia age" most information is still available as text. So-called text-mining software uses statistical and linguistic methods to try to "mine" information out of free text (hence "text mining"). That alone, however, is not enough. The next step is to make the information both usable and useful: the respective end users must be enabled to extend their knowledge on the basis of the information gained. In our specific case, the aim is to support decisions in the context of medical practice. Solving problems in this area requires a holistic view and approach. It is therefore becoming increasingly important to merge insights from computer science and psychology and to implement them technologically at a systemic level.
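  One statistical building block of such text mining is term weighting, e.g., tf-idf over free-text findings. A minimal sketch with invented sample documents, illustrative only and not the system described in the article:

    # Hedged sketch of one statistical text-mining step: tf-idf
    # weighting of terms in free-text findings. Sample texts invented.
    from sklearn.feature_extraction.text import TfidfVectorizer

    reports = [
        "patient shows elevated blood pressure and headache",
        "no abnormal findings, blood values within normal range",
        "severe headache, suspicion of migraine",
    ]
    vec = TfidfVectorizer(stop_words="english")
    tfidf = vec.fit_transform(reports)

    # Top-weighted term per report -- a crude form of "mining" free text.
    terms = vec.get_feature_names_out()
    for i, row in enumerate(tfidf.toarray()):
        print(i, terms[row.argmax()])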