Better Feedback Times Using Test Case Prioritization? Mining Data of Past Build Failures in an Automated Benchmark
dc.contributor.author | Rott, Jakob | |
dc.contributor.author | Niedermayr, Rainer | |
dc.contributor.author | Jürgens, Elmar | |
dc.contributor.editor | Kelter, Udo | |
dc.date.accessioned | 2024-07-26T10:18:38Z | |
dc.date.available | 2024-07-26T10:18:38Z | |
dc.date.issued | 2020 | |
dc.description.abstract | In software projects with growing functionality, the number of tests increases rapidly, resulting in long execution times for the whole test suite. As a consequence, it is not always possible to execute the whole test suite after each commit, so feedback times for developers increase. With long test feedback times, the effort for an early fix rises and developers can be hindered in their productive work. One solution to reduce feedback times is test case prioritization. Although test prioritization strategies have been studied extensively, they are rarely used in practice and their benefits are widely unknown. In this paper, we present a benchmark framework to evaluate the benefits of different test prioritization algorithms on open-source projects, using primarily the time until the first failure (TUFF) as the relevant metric. We conducted an empirical study with 31 open-source projects hosted on GitHub, using data from 437 builds that failed on the CI server. In 75% of the builds, the first test fails within the first 18% of the total test suite’s duration. | en |
dc.identifier.issn | 0720-8928 | |
dc.identifier.uri | https://dl.gi.de/handle/20.500.12116/44153 | |
dc.language.iso | en | |
dc.pubPlace | Bonn | |
dc.publisher | Gesellschaft für Informatik e.V. | |
dc.relation.ispartof | Softwaretechnik-Trends Band 40, Heft 2 | |
dc.relation.ispartofseries | Softwaretechnik-Trends | |
dc.subject | test case prioritization | |
dc.subject | benchmark | |
dc.subject | framework | |
dc.subject | empirical study | |
dc.subject | continuous integration | |
dc.title | Better Feedback Times Using Test Case Prioritization? Mining Data of Past Build Failures in an Automated Benchmark | en |
dc.type | Text/Conference Paper | |
mci.conference.date | 16.-18. September 2020 | |
mci.conference.location | Paderborn | |
mci.conference.sessiontitle | 22. Workshop Software-Reengineering und -Evolution (WSRE) und 11. Workshop Design for Future (DFF) | |
mci.reference.pages | 3-4 |
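The abstract's TUFF metric can be made concrete with a minimal sketch (not the authors' benchmark code; the test names, durations, and failing-test set are invented for illustration): TUFF is the accumulated execution time up to and including the first failing test in a prioritized order.

```python
# Illustrative sketch of the "time until first failure" (TUFF) metric.
# All test names, durations, and the failing set are hypothetical examples,
# not data from the paper's benchmark.

def time_until_first_failure(ordered_tests, durations, failing):
    """Sum test durations until (and including) the first failing test.

    ordered_tests: test names in prioritized execution order
    durations:     mapping of test name -> execution time in seconds
    failing:       set of test names that fail in this build
    """
    elapsed = 0.0
    for test in ordered_tests:
        elapsed += durations[test]
        if test in failing:
            return elapsed  # first failure reached
    return elapsed  # no failure: the whole suite ran

durations = {"t1": 5.0, "t2": 2.0, "t3": 8.0}
failing = {"t3"}

# Unprioritized order: the failing test runs last.
baseline = time_until_first_failure(["t1", "t2", "t3"], durations, failing)
# Prioritized order: the failing test runs first.
prioritized = time_until_first_failure(["t3", "t1", "t2"], durations, failing)
print(baseline, prioritized)  # 15.0 vs. 8.0
```

A lower TUFF means developers learn about a broken build sooner; a prioritization strategy is better under this metric the earlier it schedules the tests that will fail.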