A reinforcement learning based approach for a multiple-load carrier scheduling problem

Bibliographic Details
Published in: Journal of Intelligent Manufacturing, 2015-12, Vol. 26(6), pp. 1233–1245
Main Authors: Chen, Ci; Xia, Beixin; Zhou, Bing-hai; Xi, Lifeng
Format: Article
Language: English
Description
Summary: This paper studies the problem of scheduling a multiple-load carrier used to deliver parts to the line-side buffers of a general assembly (GA) line. To maximize the reward of the GA line, both the throughput of the GA line and the material handling distance are considered as scheduling criteria. After formulating the scheduling problem as a reinforcement learning (RL) problem by defining state features, actions, and the reward function, we develop a Q(λ) RL algorithm based scheduling approach. To improve performance, forecasted information, such as the quantities of parts required in a look-ahead horizon, is used when defining the state features and actions. Instead of applying the traditional material handling request generating policy, we use a look-ahead based request generating policy with which material handling requests are generated based not only on current buffer information but also on future part requirement information. Moreover, by utilizing a heuristic dispatching algorithm, the approach is able to handle future requests as well as existing ones. To evaluate the performance of the approach, we conduct simulation experiments comparing the proposed approach with other approaches. Numerical results demonstrate that the policies obtained by the RL approach outperform those of the other approaches.
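The abstract names a Q(λ) RL algorithm but does not detail its update rule. As a generic illustration only (not the authors' scheduling implementation, and with state/action names, hyperparameter values, and the dictionary-based tabular representation all being assumptions), a single Watkins-style Q(λ) step with eligibility traces can be sketched as:

```python
import random

def q_lambda_step(Q, E, s, a, r, s_next, actions,
                  alpha=0.1, gamma=0.95, lam=0.8, epsilon=0.1):
    """One tabular Watkins Q(lambda) update with eligibility traces.

    Q, E: dicts mapping (state, action) -> value / trace.
    Returns an epsilon-greedy action for s_next.
    """
    # Greedy action at the next state (backup target)
    a_star = max(actions, key=lambda b: Q.get((s_next, b), 0.0))
    # TD error toward the greedy backup
    delta = r + gamma * Q.get((s_next, a_star), 0.0) - Q.get((s, a), 0.0)
    # Increment the trace for the visited state-action pair
    E[(s, a)] = E.get((s, a), 0.0) + 1.0
    # Propagate the TD error along all traced pairs, decaying the traces
    for sa in list(E):
        Q[sa] = Q.get(sa, 0.0) + alpha * delta * E[sa]
        E[sa] *= gamma * lam
    # Epsilon-greedy choice of the next action
    if random.random() < epsilon:
        a_next = random.choice(actions)
    else:
        a_next = a_star
    # Watkins's variant: cut all traces after an exploratory action
    if a_next != a_star:
        E.clear()
    return a_next
```

In a scheduling setting such as the one described, `s` would encode the state features (buffer levels plus look-ahead part requirements), `actions` the candidate dispatching decisions, and `r` the reward combining GA-line throughput and material handling distance; those mappings are the paper's contribution and are not reproduced here.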
ISSN: 0956-5515; 1572-8145
DOI: 10.1007/s10845-013-0852-9