A vision-aided RTK ambiguity resolution method by map lane matching for intelligent vehicle in urban environment

Bibliographic Details
Published in: International Journal of Digital Earth 2024-12, Vol. 17 (1)
Main Authors: Zhang, Hongjuan, Qian, Chuang, Li, Wenzhuo, Li, Bijun
Format: Article
Language:English
Description
Summary: Despite the high-precision performance of GNSS real-time kinematic (RTK) positioning in many cases, harsh signal environments still lead to ambiguity-fixing failures and degraded positioning results in kinematic localization. Intelligent vehicles are equipped with cameras for perception, and visual measurements can add new information to satellite measurements, thus improving integer ambiguity resolution (AR). Given that road lane lines are stationary and their accurate positions can be acquired in advance, we encode the lane lines as rectangles and integrate them into a commonly used map format. Considering that lane lines are ambiguous and repetitive, a map-based ambiguous lane matching method is proposed to find all possible rectangles in which the vehicle may be located. A vision-based relative positioning method is then applied by measuring the relative position between a lane line corner and the vehicle. Finally, the two results are introduced into RTK single-epoch AR to find the most accurate ambiguity estimates. To evaluate our method extensively, we compare it with a tightly integrated GNSS/INS system (GINS) and a well-known tightly coupled GNSS-Visual-Inertial fusion system (GVINS) in simulated urban environments and a real dense urban environment. Experimental results demonstrate the superiority of our method over GINS and GVINS in success rate, fixed rate, and pose accuracy.
ISSN: 1753-8947, 1753-8955
DOI: 10.1080/17538947.2024.2383479