Markert, Thora; Langer, Fabian; Danos, Vasilios; Demmler, Daniel; Krupka, Daniel; Federrath, Hannes
Date: 2022-09-28
Year: 2022
ISBN: 978-3-88579-720-3
URI: https://dl.gi.de/handle/20.500.12116/39480

Abstract: ML-based AI applications are increasingly used in various fields and domains. Despite the enormous and promising capabilities of ML, the inherent lack of robustness, explainability and transparency limits the potential use cases of AI systems. In particular, within every safety- or security-critical area, such limitations require risk considerations and audits to be compliant with the prevailing safety and security demands. Unfortunately, existing standards and audit schemes do not completely cover ML-specific issues and lead to a challenging or incomplete mapping of ML functionality to existing methodologies. Thus, we propose a generalized audit framework for ML-based AI applications (GAFAI) as an anticipatory aid to achieving auditability. This conceptual, risk- and requirement-driven approach, based on sets of generalized requirements and their corresponding application-specific refinements, contributes to closing the gaps in auditing AI.

Language: en
Keywords: AI Auditing; AI Certification; Trustworthy AI; Security; Safety; Robustness; Interpretability
Title: GAFAI: Proposal of a Generalized Audit Framework for AI
DOI: 10.18420/inf2022_107
ISSN: 1617-5468