Authors: Eleks, Marian; Ihler, Jakob; Rebstadt, Jonas; Kortum-Landwehr, Henrik; Thomas, Oliver
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date: 2024-10-21
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; 2944-7682
URI: https://dl.gi.de/handle/20.500.12116/45177
DOI: 10.18420/inf2024_02
Title: Privacy, Utility, Effort, Transparency and Fairness: Identifying and Swaying Trade-offs in Privacy Preserving Machine Learning through Hybrid Methods
Type: Text/Conference Paper
Language: en
Keywords: Privacy Preserving Machine Learning; Trade-off; Hybrid Methods; Design Science

Abstract: As Artificial Intelligence (AI) permeates most economic sectors, the discipline of Privacy Preserving Machine Learning (PPML) gains increasing importance as a way to ensure appropriate handling of sensitive data in the machine learning process. Although PPML methods stand to provide privacy protection in AI use cases, each one comes with a trade-off. Practitioners applying PPML methods increasingly request an overview of the types and impacts of these trade-offs. To address this gap in knowledge, this article applies design science research to collect trade-off dimensions and method impacts in an extensive literature review. It then evaluates the specific trade-offs with a focus group of experts and finally constructs an overview of the impact of individual PPML methods and method combinations. The final trade-off dimensions are privacy, utility, effort, transparency, and fairness. Seven PPML methods and their combinations are evaluated according to their impact along these dimensions, resulting in an extensive collection of design knowledge and identified research gaps.