
Exploring Adversarial Transferability in Real-World Scenarios: Understanding and Mitigating Security Risks


Document type

Text

Date

2023


Publisher

Gesellschaft für Informatik e.V.

Abstract

Deep Neural Networks (DNNs) are known to be vulnerable to artificially generated inputs known as adversarial examples. Such examples are crafted by optimizing a small perturbation of the input so that the model misclassifies it. Interestingly, adversarial examples are often transferable from the source network on which they were created to a black-box target network. This transferability property means that attackers no longer require white-box access to a model, nor are they bound to query the target model repeatedly to craft an effective attack. Given the growing use of DNNs across many domains, it is crucial to understand the vulnerability of these networks to such attacks. Against this backdrop, the thesis studies transferability under a more realistic scenario, in which source and target models may differ in accuracy, capacity, bitwidth, architecture, and other aspects. Furthermore, it investigates defensive strategies that can be used to minimize the effectiveness of these attacks.
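The perturbation-optimization idea described above can be illustrated with a minimal sketch of the Fast Gradient Sign Method (FGSM), a standard single-step attack. This is not the method of the thesis; the toy logistic-regression "network" and all parameter names (w, b, epsilon) are illustrative assumptions chosen so the gradient can be written by hand.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm(x, y, w, b, epsilon):
    """FGSM sketch: step x in the direction that increases the loss,
    bounded by epsilon in the L-infinity norm.

    The model here is plain logistic regression, p = sigmoid(w.x + b),
    so the cross-entropy gradient w.r.t. the input is (p - y) * w.
    """
    p = sigmoid(w @ x + b)           # model's predicted probability of class 1
    grad_x = (p - y) * w             # d(loss)/dx for this toy model
    return x + epsilon * np.sign(grad_x)

# Toy example: a point the model assigns to class 1 with high confidence.
w = np.array([2.0, -1.0])            # illustrative weights
b = 0.0
x = np.array([1.0, 0.5])             # clean input
y = 1.0                              # true label
x_adv = fgsm(x, y, w, b, epsilon=0.1)
```

After the attack, the model's confidence in the true class drops even though the input moved by at most epsilon in every coordinate; in a transfer attack, such an x_adv crafted on one (source) model is then fed to a different (target) model.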

Description

Shrestha, Abhishek (2023): Exploring Adversarial Transferability in Real-World Scenarios: Understanding and Mitigating Security Risks. DC@KI2023: Proceedings of Doctoral Consortium at KI 2023, Berlin. Gesellschaft für Informatik e.V., pp. 94-102. DOI: 10.18420/ki2023-dc-11.
