Title: Digital Sovereignty with "Living Guidelines for Responsible Use of Generative AI"
Author: Mainzer, Klaus
Editors: Klein, Maike; Krupka, Daniel; Winter, Cornelia; Gergeleit, Martin; Martin, Ludger
Date: 2024 (published online 2024-10-21)
Type: Text/Conference Paper
Language: en
ISBN: 978-3-88579-746-3
ISSN: 1617-5468
DOI: 10.18420/inf2024_52
URI: https://dl.gi.de/handle/20.500.12116/45213
Keywords: machine learning; generative AI; learning algorithm; limits of AI; ChatGPT; innovation; certification; standardisation; living guidelines of AI; sovereignty

Abstract: On 14 June 2023, the European Parliament adopted a general AI regulation based on a proposal by the EU Commission. For the first time, rules on product safety and the protection of human rights were combined. The background to this legislative initiative is the fear of the increasing unpredictability, opacity, and lack of explainability of current machine learning, which is largely based on pattern recognition in massive statistical datasets made possible by modern computing power and data storage. However, we already know from elementary statistics that statistical correlations cannot provide causal explanations. Is AI becoming an inscrutable "black box" that threatens the sovereignty of human individuals? Against this background, a European research group has proposed "Living guidelines for responsible use of generative AI", which concern the sovereignty of scientists, reviewers of scientific papers, scientific journals, and scientific organisations working with chatbots. These guidelines were endorsed and published last year in "Nature" [Bo23]. The paper invited LLM developers and companies, researchers, reviewers, and editors of scientific journals, as well as publishers and research (funding) organisations, to comment on the criteria for an independent scientific auditing agency for LLMs. However, we should be cautious with our regulatory efforts so that we do not end up stifling Europe's innovation potential and falling behind in international competition.