Online Exams in the Era of ChatGPT
dc.contributor.author | Buchmann, Erik | |
dc.contributor.author | Thor, Andreas | |
dc.contributor.editor | Röpke, René | |
dc.contributor.editor | Schroeder, Ulrik | |
dc.date.accessioned | 2023-08-30T09:09:41Z | |
dc.date.available | 2023-08-30T09:09:41Z | |
dc.date.issued | 2023 | |
dc.description.abstract | Recent versions of ChatGPT demonstrate an amazing ability to answer difficult questions in natural language on a wide range of topics. This puts homework and online exams at risk, where a student can simply forward a question to the chatbot and copy its answers. We tested ChatGPT with three of our exams to find out which kinds of exam questions are still difficult for a generative AI. To this end, we categorized exam questions according to a knowledge taxonomy, and we analyzed the wrong answers in each category. To our surprise, ChatGPT performed well even with procedural knowledge, and it earned a grade of 2.7 (B-) in the IT Security exam. However, we also observed five ways to formulate questions that ChatGPT struggles with. | en |
dc.identifier.doi | 10.18420/delfi2023-15 | |
dc.identifier.isbn | 978-3-88579-732-6 | |
dc.identifier.pissn | 1617-5468 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/42240 | |
dc.language.iso | en | |
dc.publisher | Gesellschaft für Informatik e.V. | |
dc.relation.ispartof | 21. Fachtagung Bildungstechnologien (DELFI) | |
dc.relation.ispartofseries | Lecture Notes in Informatics (LNI) - Proceedings, Volume P-322 | |
dc.subject | Online Exams | |
dc.subject | ChatGPT | |
dc.title | Online Exams in the Era of ChatGPT | en |
dc.type | Text/Conference Paper | |
gi.citation.endPage | 84 | |
gi.citation.publisherPlace | Bonn | |
gi.citation.startPage | 79 | |
gi.conference.date | 11.-13. September 2023 | |
gi.conference.location | Aachen | |
gi.conference.review | full | |
gi.conference.sessiontitle | E-Assessment und Feedback |