Linearizing the Plenoptic Space
The plenoptic function, also known as the light field or the lumigraph, contains the information about the radiance of all optical rays that go through all points in space in a scene. Since no camera can capture all this information, one of the main challenges in plenoptic imaging is light field reconstruction, which consists in interpolating the ray samples captured by the cameras to create a dense light field. Most existing methods perform this task by first attempting some kind of 3D reconstruction of the visible scene. Our method, in contrast, works by modeling the scene as a set of visual points, which describe how each point moves in the image when a camera moves. We compute visual point models of various degrees of complexity, and show that high-dimensional models are able to replicate complex optical effects such as reflection or refraction, and that a model selection method can differentiate quasi-Lambertian from non-Lambertian areas in the scene.
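The core idea in the abstract lends itself to a small illustration. The sketch below is hypothetical code, not the authors' implementation: it fits, for one tracked "visual point", polynomial models of image position as a function of a 1-D camera coordinate, then uses a penalized-fit criterion (BIC, assumed here as a stand-in for whatever selection rule the paper uses) to choose a model complexity. A point whose selected model stays linear behaves quasi-Lambertian; a point that demands a higher-degree model hints at reflection or refraction. The synthetic data and all function names are invented for the example.

```python
# Hypothetical sketch of per-point "visual point" model fitting and
# model selection; not the paper's code.
import numpy as np

def design_matrix(cams: np.ndarray, degree: int) -> np.ndarray:
    """Polynomial features [1, c, c^2, ...] of the camera coordinate."""
    return np.vander(cams, degree + 1, increasing=True)

def fit_visual_point(cams, uv, degree):
    """Least-squares fit of image coords (N, 2) against camera coords (N,)."""
    A = design_matrix(cams, degree)
    coeffs = np.linalg.lstsq(A, uv, rcond=None)[0]
    rss = float(np.sum((A @ coeffs - uv) ** 2))
    return coeffs, rss

def bic(rss, n_samples, n_params):
    """Bayesian information criterion under a Gaussian-noise assumption."""
    return n_samples * np.log(rss / n_samples + 1e-12) + n_params * np.log(n_samples)

def select_degree(cams, uv, max_degree=3):
    """Degree minimizing BIC; degree > 1 suggests non-Lambertian behaviour."""
    scores = []
    for d in range(1, max_degree + 1):
        _, rss = fit_visual_point(cams, uv, d)
        n_params = 2 * (d + 1)  # one coefficient set each for u and v
        scores.append(bic(rss, len(cams), n_params))
    return 1 + int(np.argmin(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    cams = np.linspace(-1.0, 1.0, 20)  # camera positions along a line
    # Quasi-Lambertian point: image motion is (almost) linear in the camera.
    lam = np.stack([2.0 * cams + 0.5, -cams + 1.0], axis=1)
    # "Reflective" point: curved apparent motion needs a richer model.
    refl = np.stack([cams + 0.8 * cams**2, 1.0 - 0.5 * cams**3], axis=1)
    for name, uv in [("quasi-Lambertian", lam), ("non-Lambertian", refl)]:
        noisy = uv + 0.01 * rng.standard_normal(uv.shape)
        print(name, "-> selected degree:", select_degree(cams, noisy))
```

Run as-is, the linear point selects degree 1 and the curved one a higher degree; the penalty term is what keeps the linear model from losing to an overfit cubic on the quasi-Lambertian point.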
Main Authors: | Nieto, Gregoire; Devernay, Frederic; Crowley, James |
---|---|
Format: | Conference Proceeding |
Language: | English |
Subjects: | Cameras; Image reconstruction; Optical imaging; Optical refraction; Optical variables control; Three-dimensional displays; Visualization |
Online Access: | https://ieeexplore.ieee.org/document/8014952 |
container_end_page | 1725 |
---|---|
container_start_page | 1714 |
container_title | 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW) |
creator | Nieto, Gregoire; Devernay, Frederic; Crowley, James |
doi_str_mv | 10.1109/CVPRW.2017.218 |
format | conference_proceeding |
identifier | EISSN: 2160-7516; EISBN: 1538607336; EISBN: 9781538607336 |
ispartof | 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 2017, p.1714-1725 |
issn | 2160-7516 |
language | eng |
recordid | cdi_ieee_primary_8014952 |
source | IEEE Xplore All Conference Series |
subjects | Cameras; Image reconstruction; Optical imaging; Optical refraction; Optical variables control; Three-dimensional displays; Visualization |
title | Linearizing the Plenoptic Space |