
Speeding up Routing Schedules on Aisle Graphs With Single Access

Bibliographic Details
Published in: IEEE Transactions on Robotics, 2022-02, Vol. 38 (1), p. 433-447
Main Authors: Betti Sorbelli, Francesco, Carpin, Stefano, Coro, Federico, Das, Sajal K., Navarra, Alfredo, Pinotti, Cristina M.
Format: Article
Language:English
Description
Summary: In this article, we study the orienteering aisle-graph single-access problem (OASP), a variant of the orienteering problem for a robot moving in a so-called single-access aisle graph, i.e., a graph consisting of a set of rows that can be accessed from one side only. Aisle graphs model, among others, vineyards or warehouses. Each aisle-graph vertex is associated with a reward that a robot obtains when it visits the vertex itself. As the energy of the robot is limited, only a subset of vertices can be visited with a fully charged battery. The objective is to maximize the total reward collected by the robot with a battery charge. We first propose an optimal algorithm that solves the OASP in $\mathcal{O}(m^2 n^2)$ time for aisle graphs with a single access consisting of $m$ rows, each with $n$ vertices. With the goal of designing faster solutions, we propose four greedy suboptimal algorithms that run in at most $\mathcal{O}(mn(m + n))$ time. For two of them, we guarantee an approximation ratio of $\frac{1}{2}(1 - \frac{1}{e})$, where $e$ is the base of the natural logarithm, on the total reward by exploiting the well-known submodularity property. Experimentally, we show that these algorithms collect more than 80% of the optimal reward.
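
To make the setting concrete, the following minimal Python sketch models a single-access aisle graph (m rows of n vertices, each row entered and exited only through its access endpoint) and runs a generic budgeted greedy heuristic. This is not one of the paper's four greedy algorithms nor its optimal dynamic program; the function name, the simplified cost model (two travel units per additional vertex visited in a row, corridor travel between rows ignored), and the toy data are assumptions made purely for illustration.

# Illustrative sketch only (not from the paper): a single-access aisle graph
# with m rows of n vertices, each row entered and left through its single
# access point, and a generic budgeted greedy that repeatedly extends the
# row prefix offering the largest next reward while the battery budget
# still allows the extra travel.
from typing import List


def greedy_prefix_collection(rewards: List[List[float]], budget: float) -> float:
    """rewards[i][j] is the reward of the j-th vertex of row i; the robot
    collects row prefixes, paying 2 travel units per extra vertex visited
    (one edge deeper into the row, one edge back out)."""
    m = len(rewards)
    depth = [0] * m            # how far into each row the robot has gone
    spent, total = 0.0, 0.0
    step_cost = 2.0            # assumed cost of extending a prefix by one vertex

    while True:
        best_row, best_gain = -1, 0.0
        for i in range(m):
            d = depth[i]
            if d < len(rewards[i]) and spent + step_cost <= budget:
                if rewards[i][d] > best_gain:
                    best_row, best_gain = i, rewards[i][d]
        if best_row < 0:       # no affordable, rewarding extension remains
            break
        total += best_gain
        spent += step_cost
        depth[best_row] += 1
    return total


# Toy instance: 3 rows x 4 vertices and a battery budget of 10 travel units.
print(greedy_prefix_collection([[4, 1, 0, 5], [2, 2, 2, 2], [6, 0, 1, 3]],
                               budget=10.0))

Under this simplified cost model the greedy reduces to repeatedly taking the largest next-vertex reward among all rows; the paper's algorithms work with the full aisle-graph travel costs and, for two of the greedy variants, exploit submodularity of the collected reward to obtain the stated approximation guarantee.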
ISSN: 1552-3098, 1941-0468
DOI: 10.1109/TRO.2021.3082021