Author: Buchmann, Erik
Editors: Thor, Andreas; Röpke, René; Schroeder, Ulrik
Date: 2023-08-30
Year: 2023
ISBN: 978-3-88579-732-6
Handle: https://dl.gi.de/handle/20.500.12116/42240

Abstract: Recent versions of ChatGPT demonstrate an amazing ability to answer difficult questions in natural language on a wide range of topics. This puts homework assignments and online exams at risk, as a student can simply forward a question to the chatbot and copy its answer. We tested ChatGPT with three of our exams to find out which kinds of exam questions are still difficult for a generative AI. To this end, we categorized the exam questions according to a knowledge taxonomy and analyzed the wrong answers in each category. To our surprise, ChatGPT performed well even with procedural knowledge, and it earned a grade of 2.7 (B-) in the IT Security exam. However, we also identified five ways to formulate questions that ChatGPT struggles with.

Language: en
Keywords: Online Exams; ChatGPT
Title: Online Exams in the Era of ChatGPT
Type: Text/Conference Paper
DOI: 10.18420/delfi2023-15
ISSN: 1617-5468