A Q-learning based multi-strategy integrated artificial bee colony algorithm with application in unmanned vehicle path planning
Published in: Expert Systems with Applications, 2024-02, Vol. 236, p. 121303, Article 121303
Main Authors:
Format: Article
Language: English
Summary: Artificial bee colony (ABC) is a prominent meta-heuristic algorithm with strong exploration capability. However, its monotonous, single-dimension search strategy limits its search performance during the solving process. To address this issue, a Q-learning based multi-strategy integrated ABC algorithm (QMABC) is proposed. In the QMABC, multiple search strategies are introduced that exploit different individual experiences and search approaches for solution updates, and Q-learning is then employed to select among them. Compared with previous studies, this paper introduces more effective state and action configurations within the Q-learning framework. To evaluate the performance of the QMABC, the CEC 2017 benchmark functions are adopted to compare it against both ABC-based and non-ABC-based meta-heuristic algorithms. Moreover, applications in unmanned vehicle path planning further verify the effectiveness of the QMABC. Overall, the proposed QMABC demonstrates superiority in both numerical and practical experiments.
Highlights:
• A Q-learning based multi-strategy integrated ABC algorithm is proposed.
• Multiple strategies are introduced to the onlooker bee phase.
• Q-learning is utilized to adaptively determine the most suitable strategy.
• Novel state and action settings are designed in the Q-learning framework.
• The proposed algorithm shows superiority in numerical and practical experiments.
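The selection mechanism described in the summary can be sketched with a minimal tabular Q-learning selector. This is an illustrative sketch only: the state labels, strategy names, reward scheme, and hyperparameters below are assumptions for demonstration, not the paper's actual QMABC configuration.

```python
import random

# Hypothetical candidate search strategies (actions) and search states.
# These names are illustrative, not the paper's exact design.
STRATEGIES = ["exploit_best", "learn_from_neighbor", "random_perturb"]
STATES = ["improving", "stagnating"]

class QStrategySelector:
    """Tabular Q-learning selector that picks a search strategy per step."""

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        # Q-table: one row per state, one column per candidate strategy.
        self.q = {s: {a: 0.0 for a in STRATEGIES} for s in STATES}

    def select(self, state):
        # Epsilon-greedy: usually pick the best-known strategy, occasionally explore.
        if random.random() < self.epsilon:
            return random.choice(STRATEGIES)
        return max(self.q[state], key=self.q[state].get)

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update rule.
        best_next = max(self.q[next_state].values())
        self.q[state][action] += self.alpha * (
            reward + self.gamma * best_next - self.q[state][action]
        )

# Usage sketch: reward a strategy whenever it improves the incumbent solution.
selector = QStrategySelector()
state = "stagnating"
for _ in range(100):
    action = selector.select(state)
    improved = random.random() < 0.3  # stand-in for a real fitness comparison
    reward = 1.0 if improved else -0.1
    next_state = "improving" if improved else "stagnating"
    selector.update(state, action, reward, next_state)
    state = next_state
```

In an ABC-style loop, the `improved` flag would come from comparing the candidate solution produced by the chosen strategy against the current one, so strategies that keep improving solutions accumulate higher Q-values and are selected more often.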
ISSN: 0957-4174, 1873-6793
DOI: 10.1016/j.eswa.2023.121303