Visual Localization of Intersections on Autonomous Vehicles Based on HD Map
The growing number of autonomous vehicles makes safety a factor that must be maintained. One crucial prerequisite is accurate localization, especially at intersections, where most road accidents occur each year. This research aims to improve the localization process,...
Main Authors: | Utsula, Bizza S.; Ashedananta, Muhammad D.; Azis, Nadana A.; Nazaruddin, Yul Y.; Nadhira, Vebi |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | autonomous vehicle; Cameras; HD Map; localization system; Location awareness; monocular camera; occupancy grid; point cloud; Point cloud compression; Semantic segmentation; Three-dimensional displays; Visualization; Wheels |
cited_by | |
---|---|
cites | |
container_end_page | 154 |
container_issue | |
container_start_page | 150 |
container_title | 2023 23rd International Conference on Control, Automation and Systems (ICCAS) |
container_volume | |
creator | Utsula, Bizza S.; Ashedananta, Muhammad D.; Azis, Nadana A.; Nazaruddin, Yul Y.; Nadhira, Vebi |
description | The growing number of autonomous vehicles makes safety a factor that must be maintained. One crucial prerequisite is accurate localization, especially at intersections, where most road accidents occur each year. This research aims to improve the localization process, which currently suffers from drawbacks such as the Global Positioning System (GPS) becoming inaccurate when satellite signals are blocked and Light Detection and Ranging (LiDAR) being expensive and requiring heavy computation. This is done by implementing the High Definition Map (HD Map) method, which maps the condition of an area and stores it in memory on the autonomous vehicle's computer. The proposed system processes monocular camera images into a pixel-level semantic segmentation feature map, then uses a point cloud to reconstruct a 3D model of the intersection, which is also segmented semantically; position data is obtained by merging the two using an occupancy grid and a scoring function. This CNN-based localization approach yields lower accuracy and Mean Intersection over Union (MIoU) values, 60% average accuracy and 62.3 MIoU, when compared to GPS data and other 3D semantic segmentation systems, but it saves cost and computational burden. |
doi_str_mv | 10.23919/ICCAS59377.2023.10316841 |
format | conference_proceeding |
publisher | ICROS |
date | 2023-10-17 |
eisbn | 8993215278; 9788993215274 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2642-3901 |
ispartof | 2023 23rd International Conference on Control, Automation and Systems (ICCAS), 2023, p.150-154 |
issn | 2642-3901 |
language | eng |
recordid | cdi_ieee_primary_10316841 |
source | IEEE Xplore All Conference Series |
subjects | autonomous vehicle; Cameras; HD Map; localization system; Location awareness; monocular camera; occupancy grid; point cloud; Point cloud compression; Semantic segmentation; Three-dimensional displays; Visualization; Wheels |
title | Visual Localization of Intersections on Autonomous Vehicles Based on HD Map |
url | http://sfxeu10.hosted.exlibrisgroup.com/loughborough?ctx_ver=Z39.88-2004&ctx_enc=info:ofi/enc:UTF-8&ctx_tim=2025-01-19T20%3A54%3A42IST&url_ver=Z39.88-2004&url_ctx_fmt=infofi/fmt:kev:mtx:ctx&rfr_id=info:sid/primo.exlibrisgroup.com:primo3-Article-ieee_CHZPO&rft_val_fmt=info:ofi/fmt:kev:mtx:book&rft.genre=proceeding&rft.atitle=Visual%20Localization%20of%20Intersections%20on%20Autonomous%20Vehicles%20Based%20on%20HD%20Map&rft.btitle=2023%2023rd%20International%20Conference%20on%20Control,%20Automation%20and%20Systems%20(ICCAS)&rft.au=Utsula,%20Bizza%20S.&rft.date=2023-10-17&rft.spage=150&rft.epage=154&rft.pages=150-154&rft.eissn=2642-3901&rft_id=info:doi/10.23919/ICCAS59377.2023.10316841&rft.eisbn=8993215278&rft.eisbn_list=9788993215274&rft_dat=%3Cieee_CHZPO%3E10316841%3C/ieee_CHZPO%3E%3Cgrp_id%3Ecdi_FETCH-LOGICAL-i119t-1ea08d8cb7731754ff074b0f0b7a19bce73064351edcb33615d3d16c5e9c91f63%3C/grp_id%3E%3Coa%3E%3C/oa%3E%3Curl%3E%3C/url%3E&rft_id=info:oai/&rft_id=info:pmid/&rft_ieee_id=10316841&rfr_iscdi=true |
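The record's description outlines the core matching step of the proposed system: a semantically segmented monocular camera view is compared against a semantically labelled 3D HD-Map model over an occupancy grid, and a scoring function selects the best-matching position. The full text is not included in this record, so the snippet below is only a minimal, hypothetical sketch of occupancy-grid scoring together with the Mean Intersection over Union (MIoU) metric quoted in the abstract; the function names, the `render_map_at` callback, and the candidate cells are illustrative assumptions, not the authors' implementation.

```python
import numpy as np


def mean_iou(pred, target, num_classes):
    """Mean Intersection over Union (MIoU) between two label maps.

    Hypothetical helper: averages per-class IoU over the classes present in
    either map, the usual definition of the metric reported in the abstract.
    """
    ious = []
    for c in range(num_classes):
        intersection = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:
            ious.append(intersection / union)
    return float(np.mean(ious)) if ious else 0.0


def score_candidates(camera_seg, render_map_at, candidates, num_classes):
    """Score candidate occupancy-grid cells by semantic agreement with the camera view.

    `render_map_at(x, y)` is an assumed callback that returns the HD-Map
    semantics rendered from candidate cell (x, y) as a label map with the
    same shape as `camera_seg`; the paper's actual projection and scoring
    function are not described in this record.
    """
    scores = {(x, y): mean_iou(camera_seg, render_map_at(x, y), num_classes)
              for (x, y) in candidates}
    best = max(scores, key=scores.get)
    return best, scores


if __name__ == "__main__":
    # Toy usage: a 3-class label map and a dummy renderer that ignores the
    # candidate position, so every candidate scores a perfect 1.0.
    rng = np.random.default_rng(0)
    cam = rng.integers(0, 3, size=(60, 80))
    best, scores = score_candidates(cam, lambda x, y: cam, [(0, 0), (1, 0)], num_classes=3)
    print(best, scores[best])
```

Using MIoU both as the reported quality metric and as the per-candidate score keeps the sketch self-contained; a real system would likely use a purpose-built scoring function over the occupancy grid rather than reusing the evaluation metric.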