Listing by author "Panichella, Sebastiano"
1 - 2 of 2
- Text document: How Does Simulation-Based Testing for Self-Driving Cars Match Human Perception? (Software Engineering 2025, 2025). Birchler, Christian; Mohammed, Tanzil Kombarabettu; Rani, Pooja; Nechita, Teodora; Kehrer, Timo; Panichella, Sebastiano.
  Software metrics such as coverage or mutation scores have been investigated for the automated quality assessment of test suites. While traditional tools rely on software metrics, the field of self-driving cars (SDCs) has primarily focused on simulation-based test case generation, using quality metrics such as the out-of-bound (OOB) parameter to determine whether a test case fails or passes (see the OOB sketch after this list). However, it remains unclear to what extent this quality metric aligns with human perception of the safety and realism of SDCs. To address this (reality) gap, we conducted an empirical study with 50 participants to investigate the factors that determine how humans perceive SDC test cases as safe, unsafe, realistic, or unrealistic. To this end, we developed SDC-Alabaster, a framework that leverages virtual reality (VR) technologies to immerse study participants in the virtual environment of SDC simulators. Our findings indicate that the human assessment of the safety and realism of failing/passing test cases varies with factors such as the test's complexity and the possibility of interacting with the SDC. For the assessment of realism in particular, participants' age leads to differing perceptions. This study highlights the need for more research on simulation testing quality metrics and the importance of human perception in evaluating SDCs.
- Text document: Single and Multi-objective Test Cases Prioritization for Self-driving Cars in Virtual Environments (Software Engineering 2025, 2025). Birchler, Christian; Khatiri, Sajad; Derakhshanfar, Pouria; Panichella, Sebastiano; Panichella, Annibale.
  In this work, we propose an approach to prioritize simulation-based test cases for self-driving cars. The paper is published in ACM Transactions on Software Engineering and Methodology [Bi23]. Testing with simulation environments helps to identify critical failing scenarios for self-driving cars (SDCs). Simulation-based tests are safer than in-field operational tests and allow detecting software defects before deployment. However, these tests are very expensive and too numerous to be run frequently within limited time constraints. In this paper, we investigate test case prioritization techniques to detect SDC regression faults with virtual tests earlier. Our approach, called SDC-Prioritizer, prioritizes virtual tests for SDCs according to static features of the roads designed to be used within the driving scenarios. These features can be collected without running the tests, which means they do not require past execution results. We introduce two evolutionary approaches that prioritize the test cases using diversity metrics (black-box heuristics) computed on these static features. These two approaches, called SO-SDC-Prioritizer and MO-SDC-Prioritizer, use single-objective and multi-objective genetic algorithms, respectively, to find trade-offs between executing the less expensive tests and the most diverse test cases earlier (see the prioritization sketch after this list). Our empirical study in the SDC domain shows that MO-SDC-Prioritizer significantly (p-value ≤ 0.1e−10) improves the ability to detect safety-critical failures at the same level of execution time compared to baselines: random and greedy-based test case orderings. Moreover, our study indicates that multi-objective meta-heuristics outperform single-objective approaches when prioritizing simulation-based tests for SDCs. MO-SDC-Prioritizer prioritizes test cases with a large improvement in fault detection, while its overhead (up to 0.45% of the test execution cost) is negligible.
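A minimal sketch of the out-of-bound (OOB) pass/fail criterion mentioned in the first entry, assuming a lane modelled as a centre polyline with a fixed width; the function names, the lane representation, and the failure rule (any trajectory point leaving the lane) are illustrative assumptions, not the instrumentation used in the paper.

```python
import math
from typing import List, Tuple

Point = Tuple[float, float]

def min_distance_to_polyline(p: Point, polyline: List[Point]) -> float:
    """Shortest distance from point p to any segment of the polyline."""
    best = float("inf")
    for (x1, y1), (x2, y2) in zip(polyline, polyline[1:]):
        dx, dy = x2 - x1, y2 - y1
        seg_len_sq = dx * dx + dy * dy
        # Project p onto the segment, clamping the parameter to [0, 1].
        t = 0.0 if seg_len_sq == 0.0 else max(
            0.0, min(1.0, ((p[0] - x1) * dx + (p[1] - y1) * dy) / seg_len_sq)
        )
        cx, cy = x1 + t * dx, y1 + t * dy
        best = min(best, math.hypot(p[0] - cx, p[1] - cy))
    return best

def test_fails(trajectory: List[Point], lane_center: List[Point],
               lane_width: float) -> bool:
    """Hypothetical OOB rule: the test fails if the car's lateral distance
    from the lane centre ever exceeds half the lane width."""
    half_width = lane_width / 2.0
    return any(min_distance_to_polyline(p, lane_center) > half_width
               for p in trajectory)
```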
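The second entry describes evolving test orderings with genetic algorithms over static road features. The following Python sketch shows the single-objective flavour of that idea under stated assumptions: a permutation-based GA whose scalar fitness rewards scheduling feature-diverse, low-cost tests early. All names, the fitness weighting, and the GA parameters are illustrative; the actual SO-SDC-Prioritizer and MO-SDC-Prioritizer algorithms are specified in the TOSEM article [Bi23].

```python
import math
import random
from typing import List

def fitness(order: List[int], features: List[List[float]],
            costs: List[float]) -> float:
    """Scalar fitness: earlier slots weigh more, and each test is rewarded
    for being far (in static-feature space) from the tests already placed
    and penalised for its execution cost (the 0.1 weight is an assumption)."""
    score = 0.0
    for rank, idx in enumerate(order):
        weight = 1.0 / (rank + 1)  # earlier positions count more
        div = min((math.dist(features[idx], features[j]) for j in order[:rank]),
                  default=1.0)
        score += weight * (div - 0.1 * costs[idx])
    return score

def evolve(features: List[List[float]], costs: List[float],
           pop_size: int = 50, generations: int = 100, seed: int = 0) -> List[int]:
    """Toy permutation GA: truncation selection plus swap mutation."""
    rng = random.Random(seed)
    n = len(features)
    population = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=lambda o: fitness(o, features, costs), reverse=True)
        survivors = population[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            child = rng.choice(survivors)[:]
            i, j = rng.sample(range(n), 2)  # swap mutation
            child[i], child[j] = child[j], child[i]
            children.append(child)
        population = survivors + children
    return max(population, key=lambda o: fitness(o, features, costs))
```

In a multi-objective variant, diversity and execution cost would be kept as separate objectives and explored with a Pareto-based algorithm rather than collapsed into one weighted score, which is consistent with the paper's finding that the multi-objective formulation detects safety-critical failures earlier.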