Text Document
The Influence of Training Parameters on Neural Networks' Vulnerability to Membership Inference Attacks
Date
2022
Publisher
Gesellschaft für Informatik, Bonn
Abstract
With Machine Learning (ML) models being increasingly applied in sensitive domains, the related privacy concerns are rising. Neural networks (NNs) are vulnerable to so-called membership inference attacks (MIAs), which aim to determine whether a particular data sample was used to train the model. The factors that render NNs prone to this privacy attack are not yet fully understood. However, previous work suggests that the setup of the models and the training process may affect a model's vulnerability to MIAs. To investigate these factors in more detail, we experimentally evaluated the influence of training choices in NNs on the models' vulnerability. Our analyses highlight that the batch size, the activation function, and the application and placement of batch normalization and dropout have the highest impact on the success of MIAs. Additionally, we applied statistical analyses to the experimental results and found a strong positive correlation between a model's ability to resist MIAs and its generalization capacity. We also defined a metric to measure the difference in the distributions of loss values between member and non-member data samples and observed that models scoring higher on this metric were consistently more exposed to the attack. The latter observation was further confirmed by manually generating predictions for member and non-member samples that produce loss values within specific distributions and launching MIAs on them.
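The abstract does not specify how the attack or the loss-distribution metric is implemented, so the sketch below is only a minimal illustration of the general idea: a standard loss-threshold membership inference attack scored by ROC AUC, with the Wasserstein distance between member and non-member loss distributions used as a placeholder for the paper's (unspecified) distribution-gap metric. All function names, parameters, and the toy data here are illustrative assumptions, not the authors' method.

import numpy as np
from scipy.stats import wasserstein_distance
from sklearn.metrics import roc_auc_score

def loss_based_mia(member_losses, nonmember_losses):
    """Estimate MIA exposure from per-sample losses (illustrative only).

    A simple threshold attack predicts "member" for samples with low loss;
    the ROC AUC of that score measures attack success. The Wasserstein
    distance between the two loss distributions is a stand-in for the
    paper's distribution-difference metric.
    """
    losses = np.concatenate([member_losses, nonmember_losses])
    labels = np.concatenate([np.ones_like(member_losses),
                             np.zeros_like(nonmember_losses)])
    # Lower loss suggests a training member, so negate losses for the AUC score.
    attack_auc = roc_auc_score(labels, -losses)
    gap = wasserstein_distance(member_losses, nonmember_losses)
    return attack_auc, gap

# Toy usage: assume members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member = rng.gamma(shape=1.0, scale=0.3, size=1000)      # lower losses
nonmember = rng.gamma(shape=2.0, scale=0.6, size=1000)   # higher losses
auc, gap = loss_based_mia(member, nonmember)
print(f"attack AUC = {auc:.3f}, loss-distribution gap = {gap:.3f}")

In this toy setting, a larger gap between the two loss distributions yields a higher attack AUC, which mirrors the abstract's observation that models with a larger member/non-member loss divergence are more exposed to MIAs.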