Listing by author "Gotlieb, Arnaud"
1 - 3 of 3
- Conference Paper: Learning to Generate Fault-revealing Test Cases in Metamorphic Testing (Software Engineering 2021, 2021). Spieker, Helge; Gotlieb, Arnaud.
  Metamorphic Testing is a software testing paradigm that uses necessary properties of a system under test, called metamorphic relations (MRs), either to check its expected outputs or to generate new test cases. Metamorphic Testing has been successful in testing programs for which a full oracle is unavailable, or programs whose expected outputs are uncertain, such as learning systems. In this paper, we formulate the effective selection of MRs as a reinforcement learning problem based on contextual bandits. Our method, Adaptive Metamorphic Testing, sequentially selects the MR that is expected to provide the highest payoff, i.e., the one most likely to reveal faults. Which MRs are likely to reveal faults is learned from successive exploration trials: the bandit explores the available MRs and evaluates the fault landscape of the system under test, thereby providing valuable information to the tester. We present experimental results on two machine learning applications, namely image classification and object detection, where Adaptive Metamorphic Testing efficiently identifies weaknesses of the tested systems. The original paper, "Adaptive Metamorphic Testing with Contextual Bandits", first appeared in the Journal of Systems and Software (2020). (A simplified code sketch of bandit-based MR selection follows this list.)
- Conference Paper: Reinforcement Learning for Automatic Test Case Prioritization and Selection in Continuous Integration (Software Engineering und Software Management 2018, 2018). Spieker, Helge; Gotlieb, Arnaud; Marijan, Dusica; Mossige, Morten.
  The paper appeared at the International Symposium on Software Testing and Analysis (ISSTA 2017). It is part of a project on test case prioritization, selection, and execution in Continuous Integration (CI). Selecting the most promising test cases to detect bugs is hard when the impact of committed code changes is uncertain or when traceability links between code and tests are unavailable. This paper introduces Retecs, a new method for automatically learning test case selection and prioritization in CI, with the goal of minimizing the round-trip time between code commits and developer feedback on failed test cases. Retecs uses reinforcement learning to select and prioritize test cases according to their duration, previous last execution, and failure history. In a constantly changing environment, where new test cases are created and obsolete test cases are deleted, Retecs learns to prioritize error-prone test cases higher, guided by a reward function and by observing previous CI cycles. Through application to three industrial case studies, we show for the first time that reinforcement learning enables fruitful automatic adaptive test case selection and prioritization in CI and regression testing. (A simplified prioritization sketch follows this list.)
- Conference Paper: Time-aware Test Execution Scheduling for Cyber-Physical Systems (Software Engineering and Software Management 2019, 2019). Mossige, Morten; Gotlieb, Arnaud; Spieker, Helge; Meling, Hein; Carlsson, Mats.
  The paper "Time-aware Test Execution Scheduling for Cyber-Physical Systems" first appeared in the application track of the 23rd International Conference on Principles and Practice of Constraint Programming (CP 2017). Testing cyber-physical systems involves executing test cases on target machines equipped with the latest release of a software control system. When testing industrial robots, the target machines commonly need to share resources, e.g., costly hardware devices, so test case execution on the target machines must be scheduled with these shared resources in mind. With a large number of such tests executed on a regular basis, this scheduling becomes difficult to manage manually. In fact, with manual test execution planning and scheduling, some robots may remain unoccupied for long periods of time and some test cases may not be executed at all. We introduce TC-Sched, a time-aware method for automated test case execution scheduling. TC-Sched uses Constraint Programming to schedule tests on multiple machines, constrained by the tests' access to shared resources such as measurement or networking devices. We further discuss challenges and requirements encountered when automating testing for industrial robots. (A simplified scheduling sketch follows this list.)
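For the first entry above (Adaptive Metamorphic Testing), the sketch below illustrates the basic loop in heavily simplified form: an epsilon-greedy bandit (standing in for the paper's contextual bandits) repeatedly picks the metamorphic relation with the highest estimated payoff, applies it to a source test case, and treats a violated relation as a reward. The toy system under test `classify`, the example MRs, and all names are illustrative assumptions, not the authors' implementation.

```python
# Minimal epsilon-greedy sketch of bandit-based MR selection
# (a simplification of the contextual-bandit formulation in the paper).
import random

def classify(x):
    """Toy system under test: labels a list of numbers by the sign of its sum."""
    return "positive" if sum(x) >= 0 else "negative"

# Each MR is a pair: an input transformation and the expected output relation.
METAMORPHIC_RELATIONS = {
    "permute":   (lambda x: random.sample(x, len(x)), lambda y0, y1: y0 == y1),
    "duplicate": (lambda x: x + x,                    lambda y0, y1: y0 == y1),
    # expects the class to flip; violated at the zero-sum boundary of the toy SUT
    "negate":    (lambda x: [-v for v in x],          lambda y0, y1: y0 != y1),
}

class EpsilonGreedyBandit:
    """Estimates the fault-revealing payoff of each MR from observed rewards."""
    def __init__(self, arms, epsilon=0.1):
        self.epsilon = epsilon
        self.counts = {a: 0 for a in arms}
        self.values = {a: 0.0 for a in arms}

    def select(self):
        if random.random() < self.epsilon:              # explore
            return random.choice(list(self.values))
        return max(self.values, key=self.values.get)    # exploit best estimate

    def update(self, arm, reward):
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

bandit = EpsilonGreedyBandit(METAMORPHIC_RELATIONS)
for trial in range(200):
    source = [random.randint(-5, 5) for _ in range(4)]
    mr_name = bandit.select()
    transform, relation_holds = METAMORPHIC_RELATIONS[mr_name]
    follow_up = transform(source)
    violated = not relation_holds(classify(source), classify(follow_up))
    bandit.update(mr_name, reward=1.0 if violated else 0.0)  # violation = fault revealed

print("Estimated fault-revealing payoff per MR:", bandit.values)
```

The epsilon parameter trades off exploring rarely tried MRs against exploiting the ones with the best payoff estimates so far.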
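For the second entry (Retecs), the following is a minimal sketch of reward-driven test case prioritization across CI cycles under simplifying assumptions: each test case keeps a verdict history, a small agent maintains a learned priority per test case, the schedule is filled by descending priority within a time budget, and the priority is updated with a reward of 1 when the test fails. The `TestCase` record, the toy verdict model, and the update rule are assumptions for illustration, not the Retecs implementation.

```python
# Much-simplified sketch of reward-driven test case prioritization in CI.
from dataclasses import dataclass, field
import random

@dataclass
class TestCase:
    name: str
    duration: float                               # execution time in minutes (toy values)
    history: list = field(default_factory=list)   # recent verdicts, 1 = failed

class PrioritizationAgent:
    """Keeps a learned priority value per test case, updated from rewards."""
    def __init__(self, learning_rate=0.2):
        self.learning_rate = learning_rate
        self.values = {}

    def priority(self, tc):
        # Start from the recent failure rate as a simple context feature.
        base = sum(tc.history[-5:]) / 5 if tc.history else 0.5
        return self.values.get(tc.name, base)

    def update(self, tc, reward):
        old = self.priority(tc)
        self.values[tc.name] = old + self.learning_rate * (reward - old)

def run_ci_cycle(agent, test_suite, time_budget):
    """Order tests by learned priority, run what fits the budget, reward failures."""
    ranked = sorted(test_suite, key=agent.priority, reverse=True)
    spent = 0.0
    for tc in ranked:
        if spent + tc.duration > time_budget:
            continue                               # skipped in this cycle
        spent += tc.duration
        failed = random.random() < 0.3 * (1 + sum(tc.history[-3:]))  # toy verdict model
        tc.history.append(1 if failed else 0)
        agent.update(tc, reward=1.0 if failed else 0.0)  # failing tests gain priority

suite = [TestCase(f"test_{i}", duration=random.uniform(1, 5)) for i in range(20)]
agent = PrioritizationAgent()
for cycle in range(30):
    run_ci_cycle(agent, suite, time_budget=30)
print(sorted(agent.values.items(), key=lambda kv: -kv[1])[:5])
```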
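For the third entry (TC-Sched), the sketch below frames the scheduling problem in miniature: tests of known duration must be placed on machines so that no two tests hold the same exclusive global resource at the same time, while keeping the overall makespan small. TC-Sched solves this with Constraint Programming; the greedy longest-first heuristic, the toy test set, and the machine names here are illustrative assumptions only.

```python
# Greedy sketch of the test-execution scheduling problem addressed by TC-Sched:
# assign tests to machines so that exclusive global resources never overlap in time.

tests = [  # (name, duration in minutes, required exclusive global resources)
    ("t_weld_axis", 8, {"scope"}),
    ("t_paint_path", 6, set()),
    ("t_network_io", 5, {"net_sim"}),
    ("t_calibration", 7, {"scope"}),
    ("t_gripper", 4, set()),
    ("t_safety_stop", 3, {"net_sim"}),
]
machines = ["robot_1", "robot_2"]

machine_free_at = {m: 0 for m in machines}             # next free time per machine
resource_busy = {r: [] for t in tests for r in t[2]}   # booked (start, end) per resource

def earliest_start(machine, duration, resources):
    """First time >= machine availability where all resources are free for `duration`."""
    start = machine_free_at[machine]
    while True:
        conflict = None
        for r in resources:
            for (s, e) in resource_busy[r]:
                if start < e and start + duration > s:   # interval overlap
                    conflict = max(conflict or 0, e)
        if conflict is None:
            return start
        start = conflict                                 # retry after the conflict ends

schedule = []
for name, duration, resources in sorted(tests, key=lambda t: -t[1]):   # longest first
    # Pick the machine that lets this test finish earliest.
    best = min(machines, key=lambda m: earliest_start(m, duration, resources) + duration)
    start = earliest_start(best, duration, resources)
    machine_free_at[best] = start + duration
    for r in resources:
        resource_busy[r].append((start, start + duration))
    schedule.append((name, best, start, start + duration))

for name, machine, start, end in sorted(schedule, key=lambda s: s[2]):
    print(f"{name:15s} on {machine}: [{start:2d}, {end:2d}]")
print("Makespan:", max(machine_free_at.values()))
```

A full constraint model, as used by TC-Sched, can search the space of placements globally instead of committing to the first greedy choice.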