Listing by Author "Pfeuffer, Nicolas"
- Text document: 3. Wissenschaftsforum: Digitale Transformation (WiFo21) - Komplettband (3. Wissenschaftsforum: Digitale Transformation (WiFo21), 2021) Ahrend, Klaus-Michael; Stille, Wolfgang; Goltz, Katharina; Sandkuhl, Kurt; Schnell, Oliver; Ziehmann, Janek; Diekmann, Julian; Eggert, Mathias; Dreyer, Jana; Zimmermann, Birgit; Preußer, Sven; Müller, Holger; Hossnofsky, Verena; Junge, Sebastian; Graf-Vlachy, Lorenz; Reiche, Finn; Badura, Andrea; Jacobs, Stephan; Seidl, Sonja; Burkart, Marco A.; Pfeuffer, Nicolas
- Journal article: Anthropomorphic Information Systems (Business & Information Systems Engineering: Vol. 61, No. 4, 2019) Pfeuffer, Nicolas; Benlian, Alexander; Gimpel, Henner; Hinz, Oliver
- Text document: Design Principles for (X)AI-based Patient Education Systems (3. Wissenschaftsforum: Digitale Transformation (WiFo21), 2021) Pfeuffer, Nicolas. Recently, the management of chronic diseases has advanced to a prime topic for Information Systems (IS) research and practice. With the increasing capability of information technology, patients are empowered to engage in self-management of chronic diseases, which promises health benefits for the individual as well as an unburdening of clinics and economic advantages for health care systems. Nevertheless, patients must be adequately educated about risks, screening, and examination options to make patient self-management effective, sustainable, and profitable. In this regard, Explainable Artificial Intelligence ((X)AI)-based Patient Education Systems (PES) may be an opportunity to provide patient education in an interactive, intelligible, and intelligent manner. By establishing Design Principles (DP) for the engineering of effective (X)AI-based PES, instantiating them in a system prototype, and evaluating the DP with the help of general practitioners, this paper contributes to the body of knowledge in designing health IS.
- Journal article: Explanatory Interactive Machine Learning (Business & Information Systems Engineering: Vol. 65, No. 6, 2023) Pfeuffer, Nicolas; Baum, Lorenz; Stammer, Wolfgang; Abdel-Karim, Benjamin M.; Schramowski, Patrick; Bucher, Andreas M.; Hügel, Christian; Rohde, Gernot; Kersting, Kristian; Hinz, Oliver. The most promising standard machine learning methods can deliver highly accurate classification results, often outperforming standard white-box methods. However, it is hardly possible for humans to fully understand the rationale behind the black-box results, and thus these powerful methods hamper the creation of new knowledge on the part of humans and the broader acceptance of this technology. Explainable Artificial Intelligence attempts to overcome this problem by making the results more interpretable, while Interactive Machine Learning integrates humans into the process of insight discovery. The paper builds on recent successes in combining these two cutting-edge technologies and proposes how Explanatory Interactive Machine Learning (XIL) is embedded in a generalizable Action Design Research (ADR) process, called XIL-ADR. This approach can be used to analyze data, inspect models, and iteratively improve them. The paper shows the application of this process using the diagnosis of viral pneumonia, e.g., Covid-19, as an illustrative example. By these means, the paper also illustrates how XIL-ADR can help identify shortcomings of standard machine learning projects, generate new insights on the part of the human user, and thereby unlock the full potential of AI-based systems for organizations and research.
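The XIL loop the abstract describes (a model explains its reasoning, a human flags flawed reasoning, the model is corrected and retrained) can be pictured with a toy sketch. The Python snippet below is a minimal illustration assuming scikit-learn; the coefficient-based "explanation", the spurious-feature oracle, and the zero-masking correction are illustrative assumptions, not the paper's XIL-ADR procedure.

```python
# Minimal sketch of an explanatory interactive machine learning (XIL) loop.
# Assumptions: scikit-learn, a linear model whose coefficients serve as the
# "explanation", and a simulated expert who knows feature 5 is a shortcut.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           random_state=0)
rng = np.random.default_rng(0)
X[:, 5] = y + 0.1 * rng.standard_normal(len(y))  # inject a spurious shortcut

def explain(model):
    # "Explanation": magnitude of each coefficient in the linear model.
    return np.abs(model.coef_[0])

def human_feedback(importances, known_spurious=(5,), threshold=0.1):
    # Stand-in for an expert flagging spurious features the model relies on.
    return [i for i in known_spurious if importances[i] > threshold]

masked = set()
for round_no in range(3):
    X_train = X.copy()
    if masked:
        X_train[:, list(masked)] = 0.0       # apply accumulated corrections
    model = LogisticRegression(max_iter=1000).fit(X_train, y)
    flagged = human_feedback(explain(model))
    print(f"round {round_no}: flagged features {flagged}")
    if not flagged:                          # explanation passed inspection
        break
    masked.update(flagged)                   # model must ignore these next round
```

In this toy setup the model first latches onto the injected shortcut feature, the simulated expert flags it, and the retrained model no longer relies on it; real XIL systems replace each of these stand-ins with richer explanations and feedback channels.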
- Journal article: How and What Can Humans Learn from Being in the Loop? (KI - Künstliche Intelligenz: Vol. 34, No. 2, 2020) Abdel-Karim, Benjamin M.; Pfeuffer, Nicolas; Rohde, Gernot; Hinz, Oliver. This article discusses the counterpart of interactive machine learning, i.e., human learning while being in the loop of a human-machine collaboration. For such cases we propose the use of a Contradiction Matrix to assess the overlap and the contradictions between human and machine predictions. We show in a small-scale user study with experts in the area of pneumology that (1) machine-learning-based systems can classify X-rays with respect to diseases with meaningful accuracy, (2) humans partly use contradictions to reconsider their initial diagnosis, and (3) this leads to a higher overlap between human and machine diagnoses at the end of the collaboration. We argue that disclosing information on diagnosis uncertainty can be beneficial in making the human expert reconsider her or his initial assessment, which may ultimately result in a deliberate agreement. In light of the observations from our project, it becomes apparent that collaborative learning in such a human-in-the-loop scenario could lead to mutual benefits for both human learning and interactive machine learning. Bearing in mind the differences in the reasoning and learning processes of humans and intelligent systems, we argue that interdisciplinary research teams have the best chances of tackling this undertaking and generating valuable insights.
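The Contradiction Matrix named in this abstract can be read as a cross-tabulation of human and machine predictions, where off-diagonal cells mark the contradictions the expert is asked to revisit. Below is a minimal sketch in Python assuming pandas; the class labels and predictions are invented toy data, not results from the study.

```python
# Toy contradiction matrix: cross-tabulate human vs. machine diagnoses.
# Diagonal cells = agreement; off-diagonal cells = contradictions.
import pandas as pd

human   = ["no finding", "viral pneumonia", "viral pneumonia",
           "other pneumonia", "no finding"]
machine = ["no finding", "viral pneumonia", "other pneumonia",
           "other pneumonia", "viral pneumonia"]

cm = pd.crosstab(pd.Series(human, name="human"),
                 pd.Series(machine, name="machine"))
print(cm)

# Overlap rate and the contradictory cases the expert would reconsider.
agree = sum(h == m for h, m in zip(human, machine)) / len(human)
contradictions = [(i, h, m) for i, (h, m) in enumerate(zip(human, machine))
                  if h != m]
print(f"overlap: {agree:.0%}; contradictions: {contradictions}")
```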