
Location retrieval using qualitative place signatures of visible landmarks


Bibliographic Details
Published in: International Journal of Geographical Information Science (IJGIS), 2024-08, Vol. 38 (8), p. 1633-1677
Main Authors: Wei, Lijun; Gouet-Brunet, Valérie; Cohn, Anthony G.
Format: Article
Language: English
Description
Summary: Location retrieval based on visual information aims to retrieve the location of an agent (e.g. a human or robot), or the area they see, by comparing their observations with a certain representation of the environment. Existing methods generally treat the problem as a content-based image retrieval problem and have demonstrated promising results in terms of localization accuracy. However, these methods are challenging to scale up due to the volume of reference data involved, and the image descriptions might not be easily understandable or communicable for humans describing their surroundings. Considering that humans often use less precise but easily produced qualitative spatial language and high-level semantic landmarks when describing an environment, a coarse-to-fine qualitative location retrieval method is proposed in this work to quickly narrow down the initial location of an agent by exploiting the available information in large-scale open data. This approach describes and indexes a location/place using the perceived qualitative spatial relations between ordered pairs of co-visible landmarks from the perspective of viewers, termed 'qualitative place signatures' (QPS). The usability and effectiveness of the proposed method were evaluated using openly available datasets, together with simulated observations considering different types of perception errors.
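
The abstract does not spell out how a qualitative place signature is constructed or matched; the minimal Python sketch below only illustrates the general idea it describes. All names, coordinates, and the simple left/right relation between ordered landmark pairs are assumptions for illustration, not details taken from the paper: each place is indexed by the qualitative relation of every ordered pair of co-visible landmarks as seen from that viewpoint, and a new observation is matched coarsely by counting agreeing pairwise relations.

from itertools import permutations

def qualitative_relation(viewer, a, b):
    # Sign of the 2D cross product tells whether landmark b lies to the left
    # or right of the ray from the viewer towards landmark a.
    cross = ((a[0] - viewer[0]) * (b[1] - viewer[1])
             - (a[1] - viewer[1]) * (b[0] - viewer[0]))
    return "left" if cross > 0 else "right"

def place_signature(viewer, landmarks):
    # QPS sketch: one qualitative relation per ordered pair of co-visible landmarks.
    return {(ida, idb): qualitative_relation(viewer, pa, pb)
            for (ida, pa), (idb, pb) in permutations(landmarks.items(), 2)}

if __name__ == "__main__":
    # Hypothetical landmark map and two candidate viewpoints (reference places).
    landmarks = {"church": (2.0, 5.0), "tower": (6.0, 1.0), "bridge": (-1.0, 3.0)}
    reference = {p: place_signature(p, landmarks) for p in [(0.0, 0.0), (4.0, 4.0)]}
    # A new observation (viewer position slightly perturbed) is matched by
    # counting the pairwise relations it shares with each indexed signature.
    observation = place_signature((0.3, -0.2), landmarks)
    best = max(reference, key=lambda p: sum(observation.get(k) == v
                                            for k, v in reference[p].items()))
    print("retrieved place:", best)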
ISSN: 1365-8816
1362-3087
1365-8824
DOI: 10.1080/13658816.2024.2348736