Real-world ride-hailing vehicle repositioning using deep reinforcement learning
Highlights:
- A real-world deployed reinforcement learning-based algorithm for ride-hailing vehicle repositioning.
- A practical framework incorporating offline learning and online decision-time planning.
- Effective algorithmic designs for small-fleet and large-fleet scenarios.
Published in: Transportation research. Part C, Emerging technologies, 2021-09, Vol. 130, p. 103289, Article 103289
Main Authors: Jiao, Yan; Tang, Xiaocheng; Qin, Zhiwei (Tony); Li, Shuaiji; Zhang, Fan; Zhu, Hongtu; Ye, Jieping
Format: Article
Language: English
Subjects: Deep reinforcement learning; Ridesharing; Vehicle repositioning
container_start_page | 103289 |
container_title | Transportation research. Part C, Emerging technologies |
container_volume | 130 |
creator | Jiao, Yan; Tang, Xiaocheng; Qin, Zhiwei (Tony); Li, Shuaiji; Zhang, Fan; Zhu, Hongtu; Ye, Jieping
description |
- A real-world deployed reinforcement learning-based algorithm for ride-hailing vehicle repositioning.
- A practical framework incorporating offline learning and online decision-time planning.
- Effective algorithmic designs for small-fleet and large-fleet scenarios.
We present a new practical framework based on deep reinforcement learning and decision-time planning for real-world vehicle repositioning on ride-hailing (a type of mobility-on-demand, MoD) platforms. Our approach learns the spatiotemporal state-value function using a batch training algorithm with deep value networks. The optimal repositioning action is generated on demand through value-based policy search, which combines planning and bootstrapping with the value networks. For large-fleet problems, we develop several algorithmic features that we incorporate into our framework and that we demonstrate to induce coordination among the algorithmically guided vehicles. We benchmark our algorithm against baselines in a ride-hailing simulation environment to demonstrate its superiority in improving income efficiency, measured by income per hour. We have also designed and run a real-world experiment program with regular drivers on a major ride-hailing platform. We observed significantly positive results on key metrics when comparing our method with experienced drivers who performed idle-time repositioning based on their own expertise.
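The value-based policy search sketched in the abstract — pick the repositioning destination that maximizes the bootstrapped value of the resulting state — can be illustrated with a toy sketch. The `state_value` and `travel_time` functions below are purely illustrative stand-ins for the paper's trained deep value network and map data, not its actual implementation.

```python
import math

def state_value(zone, t):
    # Toy spatiotemporal value V(zone, t): zones near zone 0 are worth
    # more, and value decays as the operating day (24h) runs out.
    return max(0.0, 10.0 - zone) * (1.0 - t / 24.0)

def travel_time(src, dst):
    # Toy travel-time model: one hour per unit of zone distance.
    return abs(src - dst)

def reposition(src, t, zones, gamma=0.9):
    """Decision-time planning: choose the destination that maximizes the
    discounted, bootstrapped value of the post-reposition state."""
    best_dst, best_score = src, -math.inf
    for dst in zones:
        dt = travel_time(src, dst)
        # No immediate reward while cruising empty; the score is the
        # discounted value estimate of the state reached after the move.
        score = (gamma ** dt) * state_value(dst, t + dt)
        if score > best_score:
            best_dst, best_score = dst, score
    return best_dst
```

With these toy values, `reposition(5, 8, range(10))` returns 4: moving one zone toward the high-value region beats both staying put and the longer, more heavily discounted trip all the way to zone 0.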
doi_str_mv | 10.1016/j.trc.2021.103289 |
format | article |
identifier | ISSN: 0968-090X |
ispartof | Transportation research. Part C, Emerging technologies, 2021-09, Vol.130, p.103289, Article 103289 |
issn | 0968-090X; 1879-2359 (electronic) |
language | eng |
source | ScienceDirect Journals |
subjects | Deep reinforcement learning; Ridesharing; Vehicle repositioning |
title | Real-world ride-hailing vehicle repositioning using deep reinforcement learning |