Reinforcement learning for test case prioritization based on LLE-based K-means clustering and dynamic priority factor
Published in: Information and Software Technology, 2025-03, Vol. 179, p. 107654, Article 107654
Main Authors:
Format: Article
Language: English
Summary: Integrating reinforcement learning (RL) into test case prioritization (TCP) aims to cope with the dynamic nature and time constraints of continuous integration (CI) testing. However, achieving an optimal ranking across CI cycles is difficult if the RL agent starts from an unfavorable initial environment and must learn in a dynamic environment characterized by continuous errors. To mitigate the influence of such adverse environments, this work proposes a TCP approach that incorporates Locally Linear Embedding-based K-means clustering and a dynamic priority factor into reinforcement learning (TCP-KDRL). First, K-means clustering with Locally Linear Embedding (LLE) is used to mine relationships between test cases, and initial priority factors are assigned to them; ranking the test cases by these factors gives the RL agent an improved initial learning environment. Second, as the agent learns the ranking strategy across cycles, a comprehensive reward indicator is designed that considers both the running discrepancy and the relative positions of test cases. In addition, based on the reward values, the dynamic priority factors of the ranked test cases are adaptively updated in each learning round and the sequence is locally fine-tuned. This fine-tuning strategy provides ample feedback to the agent and enables real-time correction of an erroneous ranking environment, enhancing the generalization of RL across cycles. Finally, the experimental results demonstrate that TCP-KDRL, as an enhanced RL-based TCP method, outperforms other competitive TCP approaches. In particular, the configuration that combines the reward indicator and the fine-tuning strategy is significantly better than any other combination of two components: across 12 projects, the average improvements are 0.1548 in APFD and 0.0793 in NRPA. Compared with other TCP methods, the proposed method achieves notable gains of 0.6902 in APFD and 0.3816 in NRPA.
ISSN: 0950-5849
DOI: 10.1016/j.infsof.2024.107654
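The summary above describes the LLE-based K-means initialization and the APFD metric only at a high level. The sketch below is a minimal illustration of what such an initialization step could look like, using scikit-learn's LocallyLinearEmbedding and KMeans together with the standard APFD formula; the feature columns, the cluster-to-priority rule, the function names, and the toy data are assumptions for illustration, and the paper's RL agent, reward indicator, and dynamic priority-factor updates are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.manifold import LocallyLinearEmbedding


def initial_priorities(features, n_clusters=4, n_components=2, n_neighbors=10, seed=0):
    """Assign an initial priority factor to each test case via LLE + K-means.

    `features` is an (n_tests, n_features) array of per-test-case attributes;
    column 0 is assumed to be a historical failure rate (a hypothetical choice,
    not necessarily the paper's feature set).
    """
    embedded = LocallyLinearEmbedding(
        n_neighbors=n_neighbors, n_components=n_components, random_state=seed
    ).fit_transform(features)
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit_predict(embedded)

    # Assumed heuristic: clusters whose members failed more often in the past get
    # higher factors -- rank clusters by the mean of column 0, map rank to (0, 1].
    cluster_score = {c: features[labels == c, 0].mean() for c in range(n_clusters)}
    order = sorted(cluster_score, key=cluster_score.get, reverse=True)
    factor = {c: 1.0 - rank / n_clusters for rank, c in enumerate(order)}
    return np.array([factor[c] for c in labels])


def apfd(ordered_tests, faults_of_test):
    """Standard APFD for a test order; faults_of_test maps test id -> set of
    faults it reveals (every fault is assumed to be revealed by some test)."""
    all_faults = set().union(*faults_of_test.values())
    n, m = len(ordered_tests), len(all_faults)
    first_pos = {}
    for pos, test in enumerate(ordered_tests, start=1):
        for fault in faults_of_test.get(test, ()):
            first_pos.setdefault(fault, pos)
    return 1.0 - sum(first_pos[f] for f in all_faults) / (n * m) + 1.0 / (2 * n)


if __name__ == "__main__":
    # Six toy test cases: [historical failure rate, last run time in seconds].
    feats = np.array([[0.9, 12], [0.1, 3], [0.8, 10], [0.2, 5], [0.05, 2], [0.7, 9]])
    prio = initial_priorities(feats, n_clusters=2, n_neighbors=3)
    initial_order = list(np.argsort(-prio))          # descending priority factor
    faults = {0: {"f1"}, 2: {"f2"}, 5: {"f1", "f3"}}
    print("initial order:", initial_order, "APFD:", round(apfd(initial_order, faults), 4))
```

In this toy run, tests from the cluster with the higher assumed failure history are scheduled first, and APFD then scores how early the seeded faults are revealed by that order, which is the role the initial priority factors play before the RL agent refines the ranking.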