Authors: Rott, Jakob; Niedermayr, Rainer; Jürgens, Elmar; Kelter, Udo
Date available: 2024-07-26
Year: 2020
ISSN: 0720-8928
URI: https://dl.gi.de/handle/20.500.12116/44153

Abstract: In software projects with growing functionality, the number of tests increases rapidly, which results in long execution times for the whole test suite. As a consequence, it is not feasible to execute the whole test suite after each commit, so the feedback time to developers increases. With long test feedback times, the effort for an early fix rises and developers can be hindered in their productive work. One solution to reduce feedback times is test case prioritization. Although test prioritization strategies have been studied extensively, they are rarely used in practice and their benefits are widely unknown. In this paper, we present a benchmark framework to evaluate the benefits of different test prioritization algorithms on open-source projects, using primarily the time until the first failure (TUFF) as the relevant metric. We conducted an empirical study with 31 open-source projects hosted on GitHub, using data from 437 builds that failed on the CI server. In 75% of the builds, the first test fails within the first 18% of the total test suite's duration.

Language: en
Keywords: test case prioritization; benchmark; framework; empirical study; continuous integration
Title: Better Feedback Times Using Test Case Prioritization? Mining Data of Past Build Failures in an Automated Benchmark
Type: Text/Conference Paper
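The TUFF metric mentioned in the abstract can be sketched as follows; this is a minimal illustration, not the authors' benchmark implementation, and the function name and the `(duration, passed)` tuple representation are assumptions for the example.

```python
def time_until_first_failure(ordered_tests):
    """Sum execution times up to and including the first failing test.

    `ordered_tests` is a list of (duration_seconds, passed) pairs in the
    order a prioritization strategy would run them. Returns None if no
    test fails (TUFF is undefined for passing builds).
    """
    elapsed = 0.0
    for duration, passed in ordered_tests:
        elapsed += duration
        if not passed:
            return elapsed
    return None

# A failing test scheduled second is reached after 1.5 + 0.5 seconds.
tests = [(1.5, True), (0.5, False), (3.0, True)]
print(time_until_first_failure(tests))  # → 2.0
```

A prioritization strategy that moves likely-failing tests to the front of the ordering lowers this value, which is exactly what the benchmark compares across algorithms.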