
DNA-Depth: A Frequency-Based Day-Night Adaptation for Monocular Depth Estimation

Autonomous driving necessitates ensuring safety across diverse environments, particularly in challenging conditions like low-light or nighttime scenarios. As a fundamental task in autonomous driving, monocular depth estimation has garnered significant attention and discussion. However, current monocular depth estimation methods primarily rely on daytime images, which limits their applicability to nighttime scenarios due to the substantial domain shift between daytime and nighttime styles. In this article, we propose a novel Day-Night Adaptation method (DNA-Depth) to realize monocular depth estimation in a night environment. Specifically, we simply use the Fourier transform to address the domain alignment problem. Our method does not require extra adversarial optimization but is quite effective, and its simplicity makes it easy to bridge the day-to-night domains. To the best of our knowledge, we are the first to utilize the fast Fourier transform for nighttime monocular depth estimation. Furthermore, to alleviate the problem of mobile light sources, we utilize an unsupervised joint learning framework for depth, optical flow, and ego-motion estimation in an end-to-end manner, coupled by 3-D geometry cues. Our model can simultaneously reason about the camera motion, the depth of a static background, and the optical flow of moving objects. Extensive experiments on the Oxford RobotCar, nuScenes, and Synthia datasets demonstrate the accuracy and precision of our method by comparing it with state-of-the-art depth estimation algorithms, both qualitatively and quantitatively.

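The frequency-based adaptation the abstract describes is in the spirit of Fourier-domain style transfer: replace the low-frequency amplitude spectrum of a source (daytime) image with that of a target (nighttime) image while keeping the source phase, so scene structure survives but global appearance shifts. The sketch below is an illustrative reconstruction under that assumption, not the authors' implementation; the function name, the `beta` parameter, and the NumPy pipeline are all assumptions.

```python
import numpy as np

def fourier_domain_swap(src, trg, beta=0.01):
    """Swap the centered low-frequency amplitude of `src` with that of `trg`,
    keeping `src`'s phase. `src`/`trg` are H x W x C float arrays, same shape.
    Illustrative sketch only -- not the paper's actual implementation."""
    # Per-channel 2-D FFT over the spatial axes
    fft_src = np.fft.fft2(src, axes=(0, 1))
    fft_trg = np.fft.fft2(trg, axes=(0, 1))
    amp_src, pha_src = np.abs(fft_src), np.angle(fft_src)
    amp_trg = np.abs(fft_trg)

    # Move the zero-frequency component to the center, then swap a small
    # centered square whose half-size is a fraction `beta` of the image size
    amp_src = np.fft.fftshift(amp_src, axes=(0, 1))
    amp_trg = np.fft.fftshift(amp_trg, axes=(0, 1))
    h, w = src.shape[:2]
    b = int(min(h, w) * beta)
    ch, cw = h // 2, w // 2
    amp_src[ch - b:ch + b + 1, cw - b:cw + b + 1] = \
        amp_trg[ch - b:ch + b + 1, cw - b:cw + b + 1]
    amp_src = np.fft.ifftshift(amp_src, axes=(0, 1))

    # Recombine the mixed amplitude with the original phase and invert
    out = np.fft.ifft2(amp_src * np.exp(1j * pha_src), axes=(0, 1))
    return np.real(out)
```

Because only the amplitude is exchanged and the phase is untouched, swapping an image with itself is a no-op, and a small `beta` limits the transfer to coarse illumination statistics rather than fine texture.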

Saved in:
Bibliographic Details
Published in:IEEE Transactions on Instrumentation and Measurement, 2023, Vol.72, p.1-12
Main Authors: Shen, Mengjiao, Wang, Zhongyi, Su, Shuai, Liu, Chengju, Chen, Qijun
Format: Article
Language:English
DOI:10.1109/TIM.2023.3322498
ISSN:0018-9456
EISSN:1557-9662
Source:IEEE Electronic Library (IEL) Journals
Subjects:
Adaptation
Algorithms
Cameras
Daytime
Depth estimation
domain adaptation
dynamic environment
Estimation
Fast Fourier transformations
Fourier transform
Fourier transforms
Frequency estimation
Frequency-domain analysis
Light sources
Lighting
monocular vision
Motion simulation
Night
Optical flow
Optical flow (image analysis)
Optimization
Training