Title: Privacy Needs Reflection: Conceptional Design Rationales for Privacy-Preserving Explanation User Interfaces

Authors: Sörries, Peter; Müller-Birn, Claudia; Glinka, Katrin; Boenisch, Franziska; Margraf, Marian; Sayegh-Jodehl, Sabine; Rose, Matthias; Wienrich, Carolin; Wintersberger, Philipp; Weyers, Benjamin

Date issued: 2021
Date available: 2021-09-23

URI: https://dl.gi.de/handle/20.500.12116/37418

Abstract: The application of machine learning (ML) in the medical domain has recently received a lot of attention. However, the constantly growing need for data in such ML-based approaches raises many privacy concerns, particularly when data originate from vulnerable groups, for example, people with a rare disease. In this context, a challenging but promising approach is the design of privacy-preserving computation technologies (e.g., differential privacy). However, design guidance on how to implement such approaches in practice has been lacking. In our research, we explore these challenges in the design process by involving stakeholders from medicine, security, ML, and human-computer interaction, as well as patients themselves. We emphasize the suitability of reflective design in this context by considering the concept of privacy by design. Based on a real-world use case situated in the healthcare domain, we explore the existing privacy needs of our main stakeholders, i.e., medical researchers or physicians, and patients. Stakeholder needs are illustrated within two scenarios that help us to reflect on contradictory privacy needs. This reflection process informs conceptional design rationales and our proposal for privacy-preserving explanation user interfaces. We propose that the latter support both patients' privacy preferences for a meaningful data donation and experts' understanding of the privacy-preserving computation technology employed.

Language: en

Keywords: privacy preservation; machine learning; user interface; reflective design; conceptional design rationales

Type: Text/Workshop Paper

DOI: 10.18420/muc2021-mci-wsc-389