Listing by author "Boenisch, Franziska"
1 - 3 of 3
- Abstract: Applying Differential Privacy to Machine Learning: Challenges and Potentials (crypto day matters 31, 2019). Boenisch, Franziska.
- Konferenzbeitrag"I Never Thought About Securing My Machine Learning Systems": A Study of Security and Privacy Awareness of Machine Learning Practitioners(Mensch und Computer 2021 - Tagungsband, 2021) Boenisch, Franziska; Battis, Verena; Buchmann, Nicolas; Poikela, MaijaMachine learning (ML) models have become increasingly important components of many software systems. Therefore, ensuring their privacy and security is a crucial task. Current research mainly focuses on the development of security and privacy methods. However, ML practitioners, as the individuals in charge of translating the theory into practical applications, have not yet received much attention. In this paper, the security and privacy awareness and practices of ML practitioners are studied through an online survey with the aim of (1) gaining insight into the current state of awareness, (2) identifying influencing factors, and (3) exploring the actual use of existing methods and tools. The results indicate a relatively low general privacy and security awareness among the ML practitioners surveyed. In addition, they are less familiar with ML privacy protection methods than with general security methods or ML-related ones. Moreover, awareness correlates with the years of working with ML but not with the level of academic education or the field of occupation. Finally, the practitioners in this study seem to experience uncertainties in implementing legal frameworks, such as the European General Data Protection Regulation, into their ML workflows.
- Workshop paper: Privacy Needs Reflection: Conceptional Design Rationales for Privacy-Preserving Explanation User Interfaces (Mensch und Computer 2021 - Workshopband, 2021). Sörries, Peter; Müller-Birn, Claudia; Glinka, Katrin; Boenisch, Franziska; Margraf, Marian; Sayegh-Jodehl, Sabine; Rose, Matthias.
  The application of machine learning (ML) in the medical domain has recently received a lot of attention. However, the constantly growing need for data in such ML-based approaches raises many privacy concerns, particularly when the data originate from vulnerable groups, for example, people with a rare disease. In this context, a challenging but promising approach is the design of privacy-preserving computation technologies (e.g., differential privacy). However, design guidance on how to implement such approaches in practice has been lacking. In our research, we explore these challenges in the design process by involving stakeholders from medicine, security, ML, and human-computer interaction, as well as patients themselves. We emphasize the suitability of reflective design in this context by considering the concept of privacy by design. Based on a real-world use case situated in the healthcare domain, we explore the existing privacy needs of our main stakeholders, i.e., medical researchers or physicians and patients. Stakeholder needs are illustrated within two scenarios that help us reflect on contradictory privacy needs. This reflection process informs conceptional design rationales and our proposal for privacy-preserving explanation user interfaces. We propose that the latter support both patients' privacy preferences for a meaningful data donation and experts' understanding of the privacy-preserving computation technology employed.