Arbitrary view position and direction rendering for large-scale scenes


Saved in:
Bibliographic Details
Main Authors: Takahashi, T., Kawasaki, H., Ikeuchi, K., Sakauchi, M.
Format: Conference Proceeding
Language:English
Subjects: Cameras; Cities and towns; Image analysis; Image generation; Large-scale systems; Layout; Position measurement; Rendering (computer graphics); Roads; Virtual environment
Online Access: Request full text
container_end_page 303
container_start_page 296
container_title Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662)
container_volume 2
creator Takahashi, T.
Kawasaki, H.
Ikeuchi, K.
Sakauchi, M.
description This paper presents a new method for rendering views, especially those of large-scale scenes such as broad city landscapes. The main contribution of our method is that we can easily render a view from an arbitrary point toward an arbitrary direction on the ground in a virtual environment. Our method belongs to the family of work that employs plenoptic functions; however, unlike other works of this type, this particular method allows us to render a novel view from almost any point on the plane in which the images are taken. Previous methods, on the other hand, impose constraints on their reconstructible area. Thus, when synthesizing a large-scale virtual environment such as a city, our method has a great advantage. One application of our method is a driving simulator in the ITS domain: we can generate a view from any lane on the road using images taken by driving along just one lane. Our method, using an omni-directional camera or a measuring device of a similar type, first captures panoramic images by moving along a straight line, recording the capture position of each image. When rendering, the method divides the stored panoramic images into vertical slits, selects suitable ones based on our theory, and reassembles them to generate an image. The method can build a virtual city with walk-through capability, in which people can move and look around rather freely. In this paper, we describe the basic theory of the new plenoptic function, analyze the applicable areas of the theory and the characteristics of the generated images, and demonstrate a complete working system using both indoor and outdoor scenes.
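The paper itself is not reproduced here, so the following is only a minimal sketch of the slit-selection idea the abstract describes, not the authors' implementation. It assumes cylindrical panoramas whose columns map linearly to azimuth (column 0 at azimuth 0), captured at known x-positions along the line y = 0; the function name, parameters, and conventions are all illustrative assumptions.

```python
import numpy as np

def render_view(panoramas, capture_x, eye, yaw, hfov=np.pi / 3, out_width=640):
    """Assemble a novel view at position `eye` = (x, y), heading `yaw`,
    from vertical slits of panoramas captured along the line y = 0."""
    pano = panoramas[0]
    h, w = pano.shape[:2]
    out = np.zeros((h, out_width) + pano.shape[2:], dtype=pano.dtype)
    ex, ey = eye
    xs = np.asarray(capture_x, dtype=float)
    for col in range(out_width):
        # Azimuth of the viewing ray for this output column
        # (leftmost column looks half a field of view left of `yaw`).
        theta = yaw + hfov * (0.5 - col / max(out_width - 1, 1))
        dx, dy = np.cos(theta), np.sin(theta)
        # The line through the eye along (dx, dy) crosses the capture
        # line y = 0 at x_hit; the panorama taken nearest that point
        # recorded the same scene ray as a vertical slit at azimuth theta.
        x_hit = ex - ey * dx / dy if abs(dy) > 1e-9 else ex
        i = int(np.argmin(np.abs(xs - x_hit)))
        slit = int((theta % (2.0 * np.pi)) / (2.0 * np.pi) * w) % w
        out[:, col] = panoramas[i][:, slit]
    return out

# Example with synthetic data: 10 panoramas spaced 1 unit apart on y = 0,
# viewed from a point 2 units off the capture line, looking along +y.
panos = [np.random.randint(0, 255, (256, 1024, 3), dtype=np.uint8)
         for _ in range(10)]
view = render_view(panos, capture_x=np.arange(10.0),
                   eye=(4.0, 2.0), yaw=np.pi / 2)
```

This captures the geometric core stated in the abstract (rays traced back to the capture line select the donor panorama and slit), but omits the paper's selection theory, blending between neighboring slits, and any analysis of the applicable area.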
doi_str_mv 10.1109/CVPR.2000.854815
format conference_proceeding
identifier ISSN: 1063-6919; ISBN: 9780769506623; ISBN: 0769506623
ispartof Proceedings IEEE Conference on Computer Vision and Pattern Recognition. CVPR 2000 (Cat. No.PR00662), 2000, Vol.2, p.296-303 vol.2
issn 1063-6919
language eng
recordid cdi_ieee_primary_854815
source IEEE Electronic Library (IEL) Conference Proceedings
subjects Cameras
Cities and towns
Image analysis
Image generation
Large-scale systems
Layout
Position measurement
Rendering (computer graphics)
Roads
Virtual environment
title Arbitrary view position and direction rendering for large-scale scenes