Listing by keyword "Artificial Intelligence (AI)"
1 - 4 of 4
- Conference paper: Cybersecurity Testing for Industry 4.0: Enhancing Deployments in operational I&C systems Through Adversarial Testing and Explainable AI (INFORMATIK 2024, 2024)
  Ndiaye, Ndeye Gagnessiry; Kirdan, Erkin; Waedt, Karl
  Several emerging technologies have substantially affected the scope and implementation of security testing. These include the testing of cryptographic algorithm implementations, the security of Machine Learning (ML) and Artificial Intelligence (AI) algorithms, joint functional-safety and security-related testing (IEC TR 63069), and security- and privacy-related testing of big data and cloud computing, e.g. with regard to de-identification. This paper focuses on the security of ML and AI implementations, examining their integration into industrial control and nuclear systems (IEC 62443). Special attention is given to security threats considered throughout the AI system life cycle, particularly at the design phase. We assess the entire secure development lifecycle, which includes stages such as data and model management, risk assessment, and the enhancement of system robustness and resilience as specified by ISO/IEC 42001. To highlight the critical role of verification and validation (V&V), we conduct a proof-of-concept exploit: a targeted, gradual feature-poisoning attack on the fault detector of a water treatment and distribution simulator. We demonstrate the impact of the attack on model robustness and performance through explainable metrics, paving the way for the development of a secure lifecycle framework and thereby increasing the chances of successful deployment.
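The gradual feature-poisoning attack mentioned in this abstract can be sketched in a few lines. The synthetic "sensor" data, the logistic-regression fault detector, and the poisoning schedule below are hypothetical stand-ins, not the authors' water-treatment simulator or model; the sketch only illustrates how incrementally corrupting one informative training feature degrades detector accuracy.

```python
# Illustrative sketch of a gradual feature-poisoning attack.
# Data, model, and schedule are invented stand-ins for the paper's setup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic sensor readings: feature 0 separates normal (0) from faulty (1).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] > 0).astype(int)
X_test, y_test = X[800:], y[800:]

for eps in [0.0, 0.5, 1.0, 2.0]:
    X_poisoned = X[:800].copy()
    # Gradually push the informative feature of faulty training samples
    # toward the "normal" region, weakening the learned decision boundary.
    X_poisoned[y[:800] == 1, 0] -= eps
    clf = LogisticRegression().fit(X_poisoned, y[:800])
    print(f"poison strength {eps:.1f}: test accuracy {clf.score(X_test, y_test):.2f}")
```

Accuracy on the clean test set falls as the poison strength grows, which is the kind of robustness degradation the paper quantifies with explainable metrics.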
- Conference paper: Man vs. machine: A study comparing super-recognizers and artificial intelligence (INFORMATIK 2024, 2024)
  Lietsch, Maria; Preuß, Svenja; Becker, Sven; Labudde, Dirk
  This study addresses the limits of human and artificial intelligence (AI) in face recognition using a specially designed test consisting of person-identification and lookalike-discrimination tasks, divided into nine sets of four or five queries each. The assignments were performed by ten super-recognizers (SR) from the Chemnitz police department (Saxony) as well as by the AI systems "Face Recognition" and "GhostFaceNet". The evaluation revealed considerable differences between the results of the individual super-recognizers. In particular, the comparison between human and artificial intelligence revealed clear limitations of the AI on the tasks set. To further evaluate the super-recognizers and AI systems, additional tests are planned, covering topics such as the identification of siblings and the recognition of faces aged by AI.
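Embedding-based recognizers such as the "Face Recognition" library or GhostFaceNet map each face to a fixed-size vector and declare two faces the same person when the vectors are close enough. The sketch below shows only this generic decision rule; the embeddings and the threshold are made-up illustrations, not outputs or parameters of either system from the study.

```python
# Generic sketch of embedding-distance face matching.
# Vectors and threshold are invented, not real model outputs.
import numpy as np

def same_person(emb_a, emb_b, threshold=0.6):
    """Match when the Euclidean distance between embeddings is below threshold."""
    return float(np.linalg.norm(emb_a - emb_b)) < threshold

probe     = np.array([0.12, -0.48, 0.33, 0.05])   # hypothetical probe face
match     = np.array([0.10, -0.45, 0.30, 0.07])   # same identity, near probe
lookalike = np.array([0.50, 0.10, 0.70, -0.30])   # different identity

print(same_person(probe, match))      # close embeddings -> True
print(same_person(probe, lookalike))  # distant embeddings -> False
```

Lookalike discrimination, one of the study's task types, is exactly the regime where such distance thresholds become unreliable.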
- Conference paper: Potential of Facebook’s artificial intelligence for marketing (42. GIL-Jahrestagung, Künstliche Intelligenz in der Agrar- und Ernährungswirtschaft, 2022)
  Janssen, Martin
  Due to the COVID-19 pandemic and the age of digitization, online food platforms have become increasingly important, and the trend towards buying food online is growing. Nevertheless, many direct sellers, and conventional farmers in particular, are not familiar with selling their products online, and various barriers can affect the acceptance of doing so. Artificial Intelligence (AI) can help to reduce these barriers and fill the gap of missing know-how. This study uses Facebook’s AI for targeted marketing campaigns to find potential audiences of online food buyers, based on significant results of a quantitative online survey (n=172). As a result, people with traits such as support for animal welfare showed a positive attitude towards buying local food online.
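The audience-selection idea behind such campaigns can be sketched as a simple filter: survey respondents whose traits correlated with willingness to buy food online form a seed segment for ad targeting. The records and field names below are invented for illustration and are not the study's actual survey data.

```python
# Hypothetical sketch of trait-based audience selection from survey data.
# Records and field names are invented, not the study's dataset.
respondents = [
    {"id": 1, "animal_welfare_proponent": True,  "buys_food_online": True},
    {"id": 2, "animal_welfare_proponent": False, "buys_food_online": False},
    {"id": 3, "animal_welfare_proponent": True,  "buys_food_online": True},
]

# Select respondents with the trait that correlated with online food buying.
seed_audience = [r["id"] for r in respondents if r["animal_welfare_proponent"]]
print(seed_audience)  # -> [1, 3]
```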
- Journal article: Verbinden von Natürlicher und Künstlicher Intelligenz: eine experimentelle Testumgebung für Explainable AI (xAI) (HMD Praxis der Wirtschaftsinformatik: Vol. 57, No. 1, 2020)
  Holzinger, Andreas; Müller, Heimo
  Artificial intelligence (AI) follows the concept of human intelligence, which unfortunately is not a clearly defined concept. The most common definition, as given in cognitive science as a mental ability, includes the ability to think abstractly, logically and deductively and to solve given problems of the real world. A current topic in AI is to find out whether and to what extent algorithms are capable of learning such abstract "thinking" and reasoning similarly to humans, or whether the learning outcome is based on purely statistical correlation. In this paper we present a freely available, universal and extensible experimental test environment that we developed. These "Kandinsky Patterns" ( https://human-centered.ai/project/kandinsky-patterns , https://www.youtube.com/watch?v=UuiV0icAlRs ), named after the Russian painter and art theorist Wassily Kandinsky (1866–1944), represent a kind of "Swiss Army knife" for studying the problems mentioned above. The area that deals with these problems is called "explainable AI" (xAI). Explainability/interpretability aims to enable human experts to understand the underlying explanatory factors (causality), i.e. why an AI decision was made, thus paving the way for transparent and verifiable AI.
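The core idea of such a test environment can be sketched as follows: a figure is a set of geometric shapes, and a ground-truth rule defines concept membership, so one can check whether a learner recovered the rule itself or only a spurious correlation. The shapes and the example rule below are invented for illustration and are not patterns from the authors' published dataset.

```python
# Toy sketch of the Kandinsky-Patterns idea: figures of coloured shapes
# plus an explicit ground-truth concept rule. Rule and figures are invented.
from dataclasses import dataclass

@dataclass(frozen=True)
class Shape:
    kind: str    # e.g. "circle", "square", "triangle"
    colour: str  # e.g. "red", "blue", "yellow"

def ground_truth(figure):
    """Example rule: the figure contains as many circles as squares."""
    circles = sum(s.kind == "circle" for s in figure)
    squares = sum(s.kind == "square" for s in figure)
    return circles == squares

member     = [Shape("circle", "red"), Shape("square", "blue")]
non_member = [Shape("circle", "red"), Shape("circle", "yellow")]

print(ground_truth(member))      # balanced counts -> True
print(ground_truth(non_member))  # two circles, no square -> False
```

Because the rule is explicit, a model's predictions can be compared against it case by case, which is what makes such patterns useful for studying explainability.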