Authors: Gotsch, Carolin; Puchan, Jörg
Editors: Böhm, Markus; Wunderlich, Jürgen
Date: 2024-10-01
ISBN: 978-3-88579-801-9
URL: https://dl.gi.de/handle/20.500.12116/44664
Title: Harmonising innovation and governance: A lifecycle model for high-risk AI systems under the European AI Act
Type: Text/Conference Paper
Language: en
Keywords: AI Systems
DOI: 10.18420/AKWI2024-006
ISSN: 1617-5468

Abstract: The rapid advancement of Artificial Intelligence (AI) technologies has sparked a discussion within organisations as to whether regulatory frameworks such as the European Artificial Intelligence Act (EU AI Act) pose obstacles or opportunities for innovation and growth. In response to this ongoing discourse, this paper introduces a comprehensive lifecycle model tailored to high-risk AI systems. On the basis of a literature review, state-of-the-art Machine Learning (ML)/AI and software development lifecycles were identified to establish a foundational framework. By examining the requirements of the AI Act specific to high-risk AI systems, actionable steps were extracted and integrated into the lifecycle. The resulting framework was developed iteratively, incorporating adaptations from the identified lifecycles and mapping essential compliance steps. Expert interviews provided valuable insights for refinement, leading to a universal and future-proof lifecycle. The proposed framework not only ensures compliance with regulatory standards but also fosters innovation and development in the AI landscape.