
EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion

Generating multiview images from a single view facilitates the rapid generation of a 3D mesh conditioned on a single image. Recent methods [31] that introduce 3D global representation into diffusion models have shown the potential to generate consistent multiviews, but they have reduced generation speed and face challenges in maintaining generalizability and quality. To address this issue, we propose EpiDiff, a localized interactive multiview diffusion model. At the core of the proposed approach is to insert a lightweight epipolar attention block into the frozen diffusion model, leveraging epipolar constraints to enable cross-view interaction among feature maps of neighboring views. The newly initialized 3D modeling module preserves the original feature distribution of the diffusion model, exhibiting compatibility with a variety of base diffusion models. Experiments show that EpiDiff generates 16 multiview images in just 12 seconds, and it surpasses previous methods in quality evaluation metrics, including PSNR, SSIM and LPIPS. Additionally, EpiDiff can generate a more diverse distribution of views, improving the reconstruction quality from generated multiviews. Please see the project page at huanngzh.github.io/EpiDiff/.
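
The mechanism the abstract describes (a lightweight epipolar attention block inserted into a frozen diffusion model, so that each pixel of a view attends only to features sampled along its epipolar lines in neighboring views) can be sketched compactly. The PyTorch code below is a minimal illustrative sketch, not the authors' released implementation: the shapes, the epipolar_samples helper, and the zero-initialized output projection are all assumptions made for this example.

```python
# Minimal sketch of an epipolar attention block of the kind the abstract
# describes. Names, shapes, and the zero-init choice are assumptions for
# illustration; see the project page for the authors' actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F


def epipolar_samples(points, fund_mats, num_samples=8):
    """Hypothetical helper: given homogeneous pixel coords `points` (N, 3)
    in a target view and fundamental matrices `fund_mats` (V, 3, 3) to V
    neighbor views, return normalized sample locations along each epipolar
    line, shaped (V, N, num_samples, 2) in [-1, 1] for grid_sample."""
    # l = F p gives the epipolar line (a, b, c) with a*x + b*y + c = 0.
    lines = torch.einsum('vij,nj->vni', fund_mats, points)        # (V, N, 3)
    a, b, c = lines.unbind(-1)
    # Sweep x over [-1, 1] and solve for y on the line; the +1e-6 is a
    # naive guard against division by zero (near-vertical lines need care).
    xs = torch.linspace(-1.0, 1.0, num_samples, device=points.device)
    xs = xs.view(1, 1, -1).expand(a.shape[0], a.shape[1], -1)     # (V, N, S)
    ys = -(a.unsqueeze(-1) * xs + c.unsqueeze(-1)) / (b.unsqueeze(-1) + 1e-6)
    return torch.stack([xs, ys.clamp(-1.0, 1.0)], dim=-1)


class EpipolarAttentionBlock(nn.Module):
    """Cross-view attention where each target-view pixel attends only to
    features sampled along its epipolar lines in neighboring views."""
    def __init__(self, channels, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)
        self.proj = nn.Linear(channels, channels)
        nn.init.zeros_(self.proj.weight)   # zero-init so the block is an
        nn.init.zeros_(self.proj.bias)     # identity map at initialization

    def forward(self, target_feat, neighbor_feats, sample_xy):
        # target_feat:    (C, H, W)       features of the view being denoised
        # neighbor_feats: (V, C, H, W)    features of V neighboring views
        # sample_xy:      (V, H*W, S, 2)  epipolar sample coords in [-1, 1]
        C, H, W = target_feat.shape
        V, _, S, _ = sample_xy.shape
        q = target_feat.flatten(1).T.unsqueeze(1)                 # (HW, 1, C)
        # Bilinearly gather S features per pixel from each neighbor view.
        kv = F.grid_sample(neighbor_feats, sample_xy, align_corners=False)
        kv = kv.permute(2, 0, 3, 1).reshape(H * W, V * S, C)      # (HW, V*S, C)
        out, _ = self.attn(self.norm(q), kv, kv)
        out = self.proj(out.squeeze(1))                           # (HW, C)
        return target_feat + out.T.reshape(C, H, W)               # residual
```

Because `proj` starts at zero, the residual branch contributes nothing before training, so the frozen base model's outputs are initially unchanged; that is one plausible reading of the abstract's claim that the newly initialized module "preserves the original feature distribution" and stays compatible with a variety of base diffusion models.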

Bibliographic Details
Main Authors: Huang, Zehuan; Wen, Hao; Dong, Junting; Wang, Yaohui; Li, Yangguang; Chen, Xinyuan; Cao, Yan-Pei; Liang, Ding; Qiao, Yu; Dai, Bo; Sheng, Lu
Format: Conference Proceeding
Language: English
Published in: 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 9784–9794
DOI: 10.1109/CVPR52733.2024.00934
EISSN: 2575-7075
EISBN: 9798350353006
Subjects: 3D generation; Adaptation models; Computer vision; Diffusion models; Face recognition; Image-to-3D; Measurement; Multiview generation; Solid modeling; Three-dimensional displays