
Machine learning‐based test oracles for performance testing of cyber‐physical systems: An industrial case study on elevators dispatching algorithms


Bibliographic Details
Published in: Journal of Software: Evolution and Process, 2022-11, Vol. 34 (11), p. n/a
Main Authors: Gartziandia, Aitor, Arrieta, Aitor, Ayerdi, Jon, Illarramendi, Miren, Agirre, Aitor, Sagardui, Goiuria, Arratibel, Maite
Format: Article
Language:English
Description
Summary: The software of elevator systems needs constant maintenance to deal with new functionality, bug fixes, or legislation changes. To automatically validate the software of these systems, a typical approach in industry is to use regression oracles, which execute test inputs both in the software version under test and in a previous software version. However, these practices require a long test execution time and cannot be re-used across different test phases. To deal with these issues, we propose the Dispatching AlgoRIthm Oracle (DARIO), a test oracle that relies on regression machine-learning algorithms to detect both functional and non-functional problems of the system. The machine-learning algorithms of this oracle are trained on data from previously tested versions to predict reference functional and non-functional performance values for the new versions. An empirical evaluation with an industrial case study demonstrates the feasibility of our approach. A total of five regression learning algorithms were validated using mutation testing techniques. For functional bugs, DARIO's accuracy when predicting verdicts ranged between 95% and 98% across the different scenarios proposed. For non-functional bugs, results were competitive too, with verdict-prediction accuracy ranging between 83% and 87%. In this paper we demonstrate that training machine-learning algorithms on data from previously tested software versions to predict reference values for new versions is a valid approach to creating test oracles. Five regression learning algorithms were evaluated to predict the functional and non-functional performance of dispatching algorithms in different scenarios, and using real test data for training proved more suitable than theoretical data for this purpose.
ISSN: 2047-7473; 2047-7481
DOI: 10.1002/smr.2465
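The oracle idea described in the abstract — train a regression model on measurements from previously tested versions, predict a reference performance value for each test scenario, and flag the version under test when its observed value deviates too far — can be sketched as below. This is a minimal illustration only, not the paper's implementation: the k-nearest-neighbours regressor, the feature encoding, and the names `knn_predict`, `oracle_verdict`, and `tolerance` are all assumptions made for the example.

```python
# Minimal sketch of a regression-based performance test oracle (not DARIO itself).
# Training data: (scenario features, measured performance) from previously tested versions.
from statistics import mean

def knn_predict(train, x, k=3):
    """Predict a reference performance value for scenario features x
    using k-nearest neighbours over data from previously tested versions."""
    neighbours = sorted(train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], x)))
    return mean(y for _, y in neighbours[:k])

def oracle_verdict(train, x, observed, tolerance=0.15):
    """PASS if the observed value of the version under test lies within a
    relative `tolerance` of the predicted reference value, else FAIL."""
    reference = knn_predict(train, x)
    return "PASS" if abs(observed - reference) <= tolerance * reference else "FAIL"

# Hypothetical history: (passengers waiting, number of floors) -> average waiting time.
history = [((2, 10), 12.0), ((3, 10), 15.0), ((4, 10), 18.0), ((5, 10), 21.0)]

print(oracle_verdict(history, (3, 10), observed=15.5))  # small deviation from reference
print(oracle_verdict(history, (3, 10), observed=25.0))  # large deviation from reference
```

In this sketch the tolerance plays the role of the oracle's decision threshold: a tighter value catches smaller regressions but risks false alarms from normal measurement noise, which is the trade-off any learned performance oracle must balance.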