Title: Comparative Analysis of Vulnerabilities in Classical and Quantum Machine Learning
Authors: Reers, Volker; Maußner, Marc
Editors (proceedings): Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Type: Text/Conference Paper
Date: 2024-10-21
Year: 2024
Language: en
ISBN: 978-3-88579-746-3
ISSN: 1617-5468; eISSN: 2944-7682
DOI: 10.18420/inf2024_44
URL: https://dl.gi.de/handle/20.500.12116/45204
Keywords: Quantum Machine Learning; Cyber Security; Adversarial Attacks

Abstract: Machine learning has made remarkable breakthroughs in recent years. It has entered many sectors of the economy and everyday life, and in some cases has caused significant disruption. The emergence of quantum computing is expected to further increase the performance of machine learning, both by speeding up the training process and by increasing the expressivity of the resulting models. However, as with all technologies, both classical and quantum machine learning introduce new risks and attack vectors. This paper conducts a thorough examination of the vulnerabilities exhibited by classical and quantum machine learning models. Through a review of pertinent literature, we examine the vulnerability of classical models to attacks such as adversarial examples, evasion attacks, and poisoning attacks. Concurrently, we delve into the emerging realm of quantum machine learning, analyzing the unique properties of quantum systems and their implications for the security of machine learning applications. Our comparative analysis offers insights into the robustness, scalability, and computational complexity of classical and quantum models under different attack scenarios. Furthermore, we discuss potential defense mechanisms and mitigation strategies to enhance the resilience of both classical and quantum machine learning frameworks against adversarial attacks.
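The abstract mentions adversarial examples as one of the attack vectors examined. As a purely illustrative sketch (not taken from the paper), the classic fast-gradient-sign idea can be shown on a toy linear classifier: the input is nudged by a small step in the sign of the loss gradient, which is often enough to flip the prediction. All names here (`w`, `b`, `x`, `eps`) are hypothetical.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps=0.5):
    """FGSM-style step: move x in the direction that increases the
    cross-entropy loss of a logistic classifier (illustrative only)."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the loss w.r.t. the input x
    return x + eps * np.sign(grad_x)

# Toy setup: x is classified as class 1 (w @ x = 0.4 > 0);
# a small signed perturbation flips the decision (w @ x_adv = -0.6 < 0).
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.2, -0.2])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=b, eps=0.5)
print(w @ x, w @ x_adv)
```

The sketch only illustrates the mechanism behind the attack class named in the abstract; the paper's actual comparative analysis covers classical and quantum models more broadly.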