Authors: Brocker, Annabell; Schroeder, Ulrik; Schulz, Sandra; Kiesler, Natalie
Date accessioned: 2024-09-03
Date available: 2024-09-03
Date issued: 2024
URI: https://dl.gi.de/handle/20.500.12116/44524
Abstract: Timely and effective feedback is essential for novice programmers. Despite significant advancements in Large Language Models enabling the generation of understandable feedback in CS1 courses, determining the appropriate timing for delivering feedback automatically remains a persistent challenge. Compiler messages serve as a fundamental communication channel between programmers and computers, signaling syntactic and runtime errors. Various metrics derived from these messages could signal the need for intervention. It is therefore important to explore the correlations among these error metrics. This study conducts a comprehensive analysis of multiple public Python datasets to evaluate error metrics and explore their correlations. The findings offer insights into a potential indicator for determining the appropriate timing of feedback in programming education.
Language: en
Keywords: Programming; Error Quotient; Watwin; Robust Relative; Repeated Error Density; Python
Title: Correlation of Error Metrics in Python CS1 Courses
Type: Text/Conference paper
DOI: 10.18420/delfi2024_40