
Deep Reinforcement Learning Approach for Capacitated Supply Chain optimization under Demand Uncertainty


Bibliographic Details
Main Authors: Peng, Zedong; Zhang, Yi; Feng, Yiping; Zhang, Tuchao; Wu, Zhengguang; Su, Hongye
Format: Conference Proceeding
Language: English
description With the intensification of global trade competition, Supply Chain Management (SCM) technology has become critical for enterprises to maintain a competitive advantage. However, economic integration and increased market uncertainty have brought great challenges to SCM. In this paper, two Deep Reinforcement Learning (DRL) based methods are proposed to solve the multi-period capacitated supply chain optimization problem under demand uncertainty. The capacity constraints are satisfied from both the modelling perspective and the DRL algorithm perspective. Both continuous and discrete action spaces are considered. The performance of the methods is analyzed through simulation of three different cases. Compared to the baseline (r, Q) policy, the proposed methods show promising results for the supply chain optimization problem.
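The (r, Q) policy mentioned as the baseline is a classic replenishment rule: whenever the inventory position drops to or below a reorder point r, order a fixed quantity Q. A minimal simulation sketch of such a policy under stochastic demand and a capacity cap is shown below. It is illustrative only, not the authors' implementation: the cost parameters, the Gaussian demand model, the zero-lead-time assumption, and the function name `simulate_rq_policy` are all assumptions.

```python
import random

def simulate_rq_policy(r, Q, capacity, demand_mean, periods, seed=0):
    """Sketch of a single-echelon (r, Q) inventory policy:
    when on-hand inventory falls to or below the reorder point r,
    order a fixed batch Q, capped by the storage capacity.
    Cost parameters and the demand model are illustrative placeholders."""
    rng = random.Random(seed)
    holding_cost, stockout_cost, order_cost = 1.0, 10.0, 50.0  # assumed values
    inventory = Q          # start with one batch on hand (assumption)
    total_cost = 0.0
    for _ in range(periods):
        # Uncertain demand: truncated Gaussian around the mean (illustrative).
        demand = max(0, int(rng.gauss(demand_mean, demand_mean * 0.25)))
        sold = min(inventory, demand)
        shortage = demand - sold
        inventory -= sold
        total_cost += holding_cost * inventory + stockout_cost * shortage
        if inventory <= r:
            # Reorder point reached: order Q, but never exceed capacity.
            order = min(Q, capacity - inventory)
            inventory += order  # zero lead time assumed for simplicity
            total_cost += order_cost
    return total_cost
```

A DRL agent, by contrast, would replace the fixed (r, Q) rule with a learned mapping from the inventory state to an order quantity, which is what makes such a rule a natural baseline for comparison.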
doi_str_mv 10.1109/CAC48633.2019.8997498
identifier EISSN: 2688-0938
ispartof 2019 Chinese Automation Congress (CAC), 2019, p.3512-3517
source IEEE Xplore All Conference Series
subjects deep reinforcement learning
demand uncertainty
Dynamic scheduling
Machine learning
Mathematical model
Optimization
supply chain optimization
Supply chains
Uncertainty
vanilla policy gradient