Title: An Anthropomorphic Approach to Establish an Additional Layer of Trustworthiness of an AI Pilot

Authors: Regli, Christoph; Annighoefer, Björn; Michael, Judith; Pfeiffer, Jérôme; Wortmann, Andreas

Date: 2022 (issued 2022-02-21)

URL: https://dl.gi.de/handle/20.500.12116/38362

Abstract: AI algorithms promise solutions for situations where conventional, rule-based algorithms reach their limits. They perform well on complex problems still unknown at design time, and highly efficient functions can be implemented without developing a precise algorithm for the problem at hand. Well-tried applications demonstrate AI's ability to learn from new data, extrapolate to unseen data, and adapt to a changing environment — a situation encountered in flight operations. In aviation, however, certification regulations impede the implementation of non-deterministic or probabilistic algorithms that adapt their behaviour with increasing experience. Regulatory initiatives aim at defining new development standards in a bottom-up approach, where the suitability and integrity of the training data shall be addressed during the development process, increasing trustworthiness in effect. Methods to establish explainability and traceability of decisions made by AI algorithms are still under development, intending to reach the required level of trustworthiness. This paper outlines an approach to an independent, anthropomorphic software assurance for AI/ML systems as an additional layer of trustworthiness, encompassing top-down black-box testing while relying on a well-established regulatory framework.

Language: en

Keywords: AI; artificial intelligence; ML; machine learning; aviation; AI pilot; avionics; cockpit; certification; licencing; trust; trustworthiness; black-box testing; independent software assurance; post-market monitoring; pilot training; flight instructor; pilot checking; flight examiner; anthropomorphism; dehumanization

Type: Text/Conference Paper

DOI: 10.18420/se2022-ws-17