Journal Article

Evaluating Architectural Safeguards for Uncertain AI Black-Box Components

Document type

Text/Journal Article

Additional information

Date

2024

Authors

Scheerer, Max

Journal title

Softwaretechnik-Trends

Publisher

Gesellschaft für Informatik e.V.

Abstract

The field of Artificial Intelligence (AI) has seen enormous achievements and attracted a great deal of attention. The unverifiable nature of AI components, however, makes them inherently unreliable. For example, there are various reports of incidents in which incorrect predictions of AI components led to serious system malfunctions, some of which ended fatally. As a result, various architectural approaches (referred to as Architectural Safeguards) have been developed to deal with the unreliable and uncertain nature of AI. Software engineers now face the challenge of selecting the architectural safeguard that best satisfies the non-functional requirements (e.g. reliability). It is crucial to resolve such design decisions as early as possible in order to (i) avoid changes after the system has been deployed (and thus potentially high costs) and (ii) meet the rigorous quality requirements of safety-critical systems, in which AI is increasingly used. This dissertation presents a model-based approach that supports software engineers in the development of AI-enabled systems by enabling the evaluation of architectural safeguards. More specifically, an approach for reliability prediction of AI-enabled systems (based on established model-based techniques) is presented. Moreover, the approach is generalised to architectural safeguards with self-adaptive capabilities, i.e. self-adaptive systems. The approach has been validated on four case studies. The results show that the approach not only makes it possible to analyse the impact of architectural safeguards on the overall reliability of an AI-enabled system, but also supports software engineers in their decision-making.
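
To make the role of an architectural safeguard concrete, the following minimal Python sketch shows how a monitor with a safe fallback could enter a simple probabilistic reliability estimate. The formula, parameter names and values are illustrative assumptions added to this record only; they do not reproduce the model-based technique developed in the dissertation.

# Illustrative sketch: a generic probabilistic view of how an architectural
# safeguard (a monitor plus a safe fallback) can change the reliability of an
# AI-enabled system. All parameters and numbers are hypothetical assumptions.

def system_reliability(p_ai_correct: float,
                       p_detect_failure: float,
                       p_fallback_correct: float) -> float:
    # The AI component is correct with probability p_ai_correct. If it fails,
    # the safeguard detects the failure with probability p_detect_failure and
    # the fallback then delivers a correct (safe) result with probability
    # p_fallback_correct.
    p_ai_wrong = 1.0 - p_ai_correct
    return p_ai_correct + p_ai_wrong * p_detect_failure * p_fallback_correct

if __name__ == "__main__":
    baseline = system_reliability(0.95, 0.0, 0.0)   # no safeguard in place
    guarded = system_reliability(0.95, 0.90, 0.99)  # monitor plus fallback
    print(f"without safeguard: {baseline:.4f}")     # 0.9500
    print(f"with safeguard:    {guarded:.4f}")      # about 0.9946

Under these assumed numbers the safeguard raises the estimated probability of a correct result from 0.95 to roughly 0.995, which illustrates the kind of architecture-level trade-off the abstract refers to.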

Description

Scheerer, Max (2024): Evaluating Architectural Safeguards for Uncertain AI Black-Box Components. Softwaretechnik-Trends Band 44, Heft 2. Gesellschaft für Informatik e.V.
