
EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion

Generating multiview images from a single view facilitates the rapid generation of a 3D mesh conditioned on a single image. Recent methods that introduce a 3D global representation into diffusion models have shown the potential to generate consistent multiviews, but they suffer from reduced generation speed and face challenges in maintaining generalizability and quality. To address this issue, we propose EpiDiff, a localized interactive multiview diffusion model. At the core of the proposed approach is a lightweight epipolar attention block inserted into the frozen diffusion model, leveraging epipolar constraints to enable cross-view interaction among feature maps of neighboring views. The newly initialized 3D modeling module preserves the original feature distribution of the diffusion model, exhibiting compatibility with a variety of base diffusion models. Experiments show that EpiDiff generates 16 multiview images in just 12 seconds, and it surpasses previous methods in quality evaluation metrics, including PSNR, SSIM, and LPIPS. Additionally, EpiDiff can generate a more diverse distribution of views, improving the reconstruction quality from generated multiviews. Please see our project page at https://huanngzh.github.io/EpiDiff/.
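
The core idea described in the abstract, restricting cross-view attention to epipolar neighborhoods while keeping the base diffusion UNet frozen, can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the names (`epipolar_mask`, `EpipolarAttention`), the distance threshold, the use of normalized pixel coordinates, and the zero-initialized output projection (one common way to preserve a frozen model's feature distribution) are all assumptions.

```python
# Illustrative sketch only (assumed names and shapes, not the authors' code):
# a cross-view attention block restricted to epipolar neighborhoods, meant to
# sit beside the frozen self-attention layers of a 2D diffusion UNet.
import torch
import torch.nn as nn


def epipolar_mask(F_mats: torch.Tensor, h: int, w: int, threshold: float = 0.05) -> torch.Tensor:
    """Boolean mask (V, V, h*w, h*w): for each query pixel of view i, keep only
    the pixels of view j that lie near its epipolar line.

    F_mats: (V, V, 3, 3) fundamental matrices (assumed precomputed from the
    relative camera poses, in normalized pixel coordinates).
    """
    ys, xs = torch.meshgrid(
        torch.linspace(0, 1, h), torch.linspace(0, 1, w), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=-1).reshape(-1, 3)   # (N, 3)
    lines = torch.einsum("ijab,nb->ijna", F_mats, pix)                        # epipolar line l = F p
    lines = lines / (lines[..., :2].norm(dim=-1, keepdim=True) + 1e-8)
    dist = torch.einsum("ijna,ma->ijnm", lines, pix).abs()                    # point-to-line distance
    return dist < threshold                                                   # (V, V, N, N)


class EpipolarAttention(nn.Module):
    """Lightweight trainable block added beside a frozen diffusion UNet."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.to_out = nn.Linear(dim, dim)
        nn.init.zeros_(self.to_out.weight)   # zero-init output so the block starts as an
        nn.init.zeros_(self.to_out.bias)     # identity w.r.t. the frozen model's features

    def forward(self, feats: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # feats: (V, h*w, C) feature maps of V neighboring views
        # mask:  (V, V, h*w, h*w) from epipolar_mask()
        V, N, C = feats.shape
        tokens = feats.reshape(1, V * N, C)
        # True entries of attn_mask are *blocked*; reorder to (query, key) layout.
        blocked = ~mask.permute(0, 2, 1, 3).reshape(V * N, V * N)
        blocked.fill_diagonal_(False)        # always allow self-attention; avoids empty rows
        out, _ = self.attn(tokens, tokens, tokens, attn_mask=blocked)
        return feats + self.to_out(out.reshape(V, N, C))   # residual cross-view update


# Usage sketch with illustrative sizes (small enough to run on CPU).
if __name__ == "__main__":
    V, h, w, C = 4, 16, 16, 64
    F_mats = torch.randn(V, V, 3, 3)         # placeholder fundamental matrices
    mask = epipolar_mask(F_mats, h, w)
    feats = torch.randn(V, h * w, C)
    updated = EpipolarAttention(C)(feats, mask)   # (V, h*w, C)
```

In this sketch, each query pixel attends only to pixels near its epipolar line in the neighboring views, which injects the geometric constraint while keeping the number of keys per query small, and the zero-initialized residual projection leaves the frozen UNet's features unchanged at the start of training, in line with the abstract's claim that the new module preserves the original feature distribution.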

Bibliographic Details
Published in: arXiv.org, 2024-04
Main Authors: Huang, Zehuan; Wen, Hao; Dong, Junting; Wang, Yaohui; Li, Yangguang; Chen, Xinyuan; Cao, Yan-Pei; Liang, Ding; Qiao, Yu; Dai, Bo; Sheng, Lu
Format: Article
Language: English
EISSN: 2331-8422
Subjects: Constraint modelling; Diffusion barriers; Feature maps; Finite element method; Image enhancement; Image reconstruction; Quality assessment; Three dimensional models
Source: ProQuest Publicly Available Content database
Online Access: Get full text