Exploring Adversarial Transferability in Real-World Scenarios: Understanding and Mitigating Security Risks
Document type
Text
Additional information
Date
2023
Publisher
Gesellschaft für Informatik e.V.
Abstract
Deep Neural Networks (DNNs) are known to be vulnerable to artificially generated inputs known as adversarial examples. Such samples are crafted by optimizing a perturbation of the input data specifically to induce a misclassification. Interestingly, adversarial examples transfer from the source network on which they were created to black-box target networks. This transferability property means that attackers no longer require white-box access to a model, nor do they need to query the target model repeatedly, in order to mount an effective attack. Given the growing adoption of DNNs across domains, it is crucial to understand the vulnerability of these networks to such attacks. On this premise, the thesis studies transferability under a more realistic scenario in which source and target models may differ in accuracy, capacity, bitwidth, architecture, and other aspects. Furthermore, it investigates defensive strategies that can minimize the effectiveness of these attacks.
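As a concrete illustration of the attack setting described in the abstract, the following minimal sketch crafts adversarial examples on a white-box source model and measures how often they also fool an independently trained target model. This is an illustrative stand-in, not the thesis's own method: the Fast Gradient Sign Method (FGSM, Goodfellow et al., 2015) substitutes for whatever attacks the thesis studies, PyTorch is assumed, and source_model, target_model, x, y, and epsilon are hypothetical placeholders.

import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon):
    # Craft adversarial examples on the white-box source model:
    # take one signed-gradient step of size epsilon that increases
    # the classification loss, then clamp to the valid input range.
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()

@torch.no_grad()
def transfer_rate(x_adv, target_model, y):
    # Fraction of adversarial examples, crafted on the source model,
    # that also fool the black-box target model.
    preds = target_model(x_adv).argmax(dim=1)
    return (preds != y).float().mean().item()

# Hypothetical usage: the perturbation is computed purely on the
# source model, yet may still cause misclassifications on the target.
# x_adv = fgsm_attack(source_model, x, y, epsilon=8 / 255)
# print("transfer rate:", transfer_rate(x_adv, target_model, y))

A high transfer rate here, even though the target model is never queried while the perturbation is crafted, is precisely the black-box risk the abstract describes; the defensive strategies the thesis investigates would aim to drive this rate down.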