Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction
Dual pixels contain disparity cues arising from the defocus blur. This disparity information is useful for many vision tasks ranging from autonomous driving to 3D creative realism. However, directly estimating disparity from dual pixels is less accurate. This work hypothesizes that distilling high-precision dark stereo knowledge, implicitly or explicitly, to efficient dual-pixel student networks enables faithful reconstructions. This dark knowledge distillation should also alleviate stereo-synchronization setup and calibration costs while dramatically increasing parameter and inference time efficiency. We collect the first and largest 3-view dual-pixel video dataset, dpMV, to validate our explicit dark knowledge distillation hypothesis. We show that these methods outperform purely monocular solutions, especially in challenging foreground-background separation regions, using faithful guidance from dual pixels. Finally, we demonstrate an unconventional use case unlocked by dpMV and implicit dark knowledge distillation from an ensemble of teachers for Light Field (LF) video reconstruction. Our LF video reconstruction method is the fastest and most temporally consistent to date. It remains competitive in reconstruction fidelity while offering many other essential properties like high parameter efficiency, implicit disocclusion handling, zero-shot cross-dataset transfer, geometrically consistent inference on higher spatial-angular resolutions, and adaptive baseline control. All source code is available at https://github.com/Aryan-Garg.
Main Authors: | Garg, Aryan; Mallampali, Raghav; Joshi, Akshat; Govindarajan, Shrisudhan; Mitra, Kaushik |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Dataset; Disparity Estimation; Dual Pixels; Knowledge Distillation; Light Field; Self-Supervision; Vision Transformers |
Online Access: | Request full text |
cited_by | |
---|---|
cites | |
container_end_page | 12 |
container_issue | |
container_start_page | 1 |
container_title | 2024 IEEE International Conference on Computational Photography (ICCP) |
container_volume | |
creator | Garg, Aryan; Mallampali, Raghav; Joshi, Akshat; Govindarajan, Shrisudhan; Mitra, Kaushik |
description | Dual pixels contain disparity cues arising from the defocus blur. This disparity information is useful for many vision tasks ranging from autonomous driving to 3D creative realism. However, directly estimating disparity from dual pixels is less accurate. This work hypothesizes that distilling high-precision dark stereo knowledge, implicitly or explicitly, to efficient dual-pixel student networks enables faithful reconstructions. This dark knowledge distillation should also alleviate stereo-synchronization setup and calibration costs while dramatically increasing parameter and inference time efficiency. We collect the first and largest 3-view dual-pixel video dataset, dpMV, to validate our explicit dark knowledge distillation hypothesis. We show that these methods outperform purely monocular solutions, especially in challenging foreground-background separation regions using faithful guidance from dual pixels. Finally, we demonstrate an unconventional use case unlocked by dpMV and implicit dark knowledge distillation from an ensemble of teachers for Light Field (LF) video reconstruction. Our LF video reconstruction method is the fastest and most temporally consistent to date. It remains competitive in reconstruction fidelity while offering many other essential properties like high parameter efficiency, implicit disocclusion handling, zero-shot cross-dataset transfer, geometrically consistent inference on higher spatial-angular resolutions, and adaptive baseline control. All source code is available at the repository https://github.com/Aryan-Garg. |
doi_str_mv | 10.1109/ICCP61108.2024.10644854 |
format | conference_proceeding |
publisher | IEEE |
publication_date | 2024-07-22 |
eissn | 2472-7636 |
eisbn | 9798350361551 |
fulltext | fulltext_linktorsrc |
identifier | EISSN: 2472-7636 |
ispartof | 2024 IEEE International Conference on Computational Photography (ICCP), 2024, p.1-12 |
issn | 2472-7636 |
language | eng |
recordid | cdi_ieee_primary_10644854 |
source | IEEE Xplore All Conference Series |
subjects | Dataset; Disparity Estimation; Dual Pixels; Knowledge Distillation; Knowledge engineering; Light Field; Light fields; Photography; Reconstruction algorithms; Self-Supervision; Source coding; Three-dimensional displays; Transformers; Vision Transformers |
title | Stereo-Knowledge Distillation from dpMV to Dual Pixels for Light Field Video Reconstruction |