
Ground-Satellite Coupling for Cross-View Geolocation Combined With Multiscale Fusion of Spatial Features

Bibliographic Details
Published in: IEEE Geoscience and Remote Sensing Letters, 2024, Vol. 21, pp. 1-5
Main Authors: Zhao, Luying; Zhou, Yang; Hu, Xiaofei; Huang, Gaoshuang; Zhang, Chenglong; Gan, Wenjian; Hou, Mingbo
Format: Article
Language: English
Description
Summary: Geolocating a street-view image by matching it against geotagged satellite images is crucial for location assessment, but the perspective disparity between satellite and street-view images poses significant challenges. The mainstream approach addresses this by converting satellite images to a ground-level perspective. As a consequence, the reference satellite images must be centered on, and cover the same area as, the street-view images, and sometimes must also be aligned to north, conditions that are difficult to satisfy in practical applications. This letter introduces a ground-breaking method for the opposite conversion, from ground-level images to satellite images. We effectively couple ground and satellite images by establishing a hemispheric projection relationship that transforms ground images into the satellite perspective, thereby resolving the large perspective difference in cross-view geolocation (CVG). In addition, we propose a multiscale fusion of spatial features mechanism to enhance deep feature representations and improve recall and geolocation accuracy. Under a stringent quantitative evaluation, the proposed method attains remarkable Top-1 accuracy rates of 75.08% on the CVACT_val dataset and 39.92% on the CVACT_test dataset, markedly enhancing its practical utility and contributing substantially to research on CVG.
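
The hemispheric projection itself is not spelled out in this record; as a rough, non-authoritative sketch of the general idea, the Python snippet below (our own construction, not the letter's formulation) projects an equirectangular street-view panorama onto a synthetic overhead view by assuming a flat ground plane and a unit camera height. The panorama layout, the depression-angle model, and all parameter names are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def ground_to_overhead(pano, out_size=256, fov_radius=1.0):
    """Project an equirectangular street-view panorama onto a synthetic
    overhead (satellite-like) view via a simple hemispheric model.

    pano: H x W x C array. Assumed layout: rows span elevation from the
    horizon (row 0) down to the nadir (row H-1); columns span azimuth
    [0, 2*pi) clockwise from north. These conventions are assumptions.
    """
    H, W = pano.shape[:2]
    # Overhead pixel grid centered on the camera position.
    ys, xs = np.mgrid[0:out_size, 0:out_size]
    u = (xs - out_size / 2) / (out_size / 2)   # east,  in [-1, 1]
    v = (ys - out_size / 2) / (out_size / 2)   # north, in [-1, 1]
    r = np.clip(np.hypot(u, v), 1e-6, 1.0) * fov_radius
    az = np.arctan2(u, -v) % (2 * np.pi)       # azimuth, clockwise from north
    # Flat-ground heuristic: a ground point at horizontal distance r is seen
    # at depression angle arctan(h / r); camera height h = 1 is assumed.
    dep = np.arctan2(1.0, r)
    rows = (dep / (np.pi / 2)) * (H - 1)       # horizon row 0 .. nadir row H-1
    cols = (az / (2 * np.pi)) * (W - 1)
    # Bilinear resampling of each channel at the computed pano coordinates.
    out = np.stack([map_coordinates(pano[..., c], [rows, cols], order=1)
                    for c in range(pano.shape[2])], axis=-1)
    return out
```

In this toy model, fov_radius controls how far from the camera the ground plane is sampled; a real implementation would also need to handle unknown scene depth and occlusion, which the flat-ground assumption ignores.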
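
Likewise, the multiscale fusion of spatial features mechanism is only named in the abstract. The following PyTorch sketch shows one conventional way such a block could be realized, with per-scale 1x1 projections, upsampling to a common resolution, and learned softmax scale weights; the class name, channel sizes, and fusion rule are hypothetical and are not taken from the letter.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiScaleSpatialFusion(nn.Module):
    """Hypothetical multiscale spatial-feature fusion block: features from
    several backbone stages are projected to a common channel width,
    upsampled to the finest spatial resolution, and merged with learned
    per-scale weights."""

    def __init__(self, in_channels=(256, 512, 1024), out_channels=256):
        super().__init__()
        # One 1x1 projection per input scale, mapping to a shared width.
        self.proj = nn.ModuleList(
            [nn.Conv2d(c, out_channels, kernel_size=1) for c in in_channels])
        # Learnable logits turned into softmax weights over scales.
        self.scale_logits = nn.Parameter(torch.zeros(len(in_channels)))

    def forward(self, feats):
        # feats: list of (B, C_i, H_i, W_i) tensors, ordered fine -> coarse.
        target = feats[0].shape[-2:]
        mapped = [F.interpolate(p(f), size=target, mode='bilinear',
                                align_corners=False)
                  for p, f in zip(self.proj, feats)]
        w = torch.softmax(self.scale_logits, dim=0)
        return sum(wi * m for wi, m in zip(w, mapped))
```

For example, the stage outputs c3, c4, c5 of a ResNet-style backbone could be fused with MultiScaleSpatialFusion()([c3, c4, c5]); whether the letter weights scales this way or fuses them differently cannot be determined from the abstract.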
ISSN: 1545-598X
1558-0571
DOI: 10.1109/LGRS.2024.3388574