
Assembling convolution neural networks for automatic viewing transformation

Images taken under different camera poses are rotated or distorted, which leads to poor perception experiences. This paper proposes a new framework to automatically transform the images to the conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms the state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.


Bibliographic Details
Main Authors: Haibin Cai, Lei Jiang, Bangli Liu, Yiqi Deng, Qinggang Meng
Format: Default Article
Published: 2019
Subjects:
Online Access:https://hdl.handle.net/2134/9912980.v1
author Haibin Cai
Lei Jiang
Bangli Liu
Yiqi Deng
Qinggang Meng
author_facet Haibin Cai
Lei Jiang
Bangli Liu
Yiqi Deng
Qinggang Meng
author_sort Haibin Cai (794409)
collection Figshare
description Images taken under different camera poses are rotated or distorted, which leads to poor perception experiences. This paper proposes a new framework to automatically transform the images to the conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms the state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
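The projection mapping mentioned in the description can be illustrated with a minimal sketch. This is not the paper's algorithm (the authors derive a referential 3D ground plane with CNNs before mapping); it only shows the standard building block such viewing transformations rest on: applying a 3x3 planar homography to an image point in homogeneous coordinates. The function name `apply_homography` and the sample matrices are illustrative assumptions, not names from the paper.

```python
# Minimal sketch of planar projection mapping, assuming a known 3x3
# homography H from the source view to the rectified ("conformable")
# view. Illustrative only -- not the paper's specific algorithm.

def apply_homography(H, x, y):
    """Map pixel (x, y) through homography H and return Euclidean coords.

    The point is lifted to homogeneous coordinates (x, y, 1), multiplied
    by H, then divided by the resulting w to project back to the plane.
    """
    xs = H[0][0] * x + H[0][1] * y + H[0][2]
    ys = H[1][0] * x + H[1][1] * y + H[1][2]
    w  = H[2][0] * x + H[2][1] * y + H[2][2]
    return xs / w, ys / w

# The identity homography leaves points unchanged; a non-zero entry in
# the bottom row introduces the perspective foreshortening that a
# viewing transformation removes or adds.
I3 = [[1.0, 0.0, 0.0],
      [0.0, 1.0, 0.0],
      [0.0, 0.0, 1.0]]
print(apply_homography(I3, 3.0, 4.0))  # -> (3.0, 4.0)
```

In a full pipeline the matrix would come from the estimated ground plane (or, in the baseline methods the paper compares against, from detected vanishing points), and the whole image would be warped pixel-by-pixel rather than a single point.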
format Default
Article
id rr-article-9912980
institution Loughborough University
publishDate 2019
record_format Figshare
spelling rr-article-99129802019-09-09T00:00:00Z Assembling convolution neural networks for automatic viewing transformation Haibin Cai (794409) Lei Jiang (73366) Bangli Liu (6178787) Yiqi Deng (6004352) Qinggang Meng (1257072) Electrical & Electronic Engineering Information and Computing Sciences Engineering Technology Automatic viewing transform Convolution neural networks Deep learning Images taken under different camera poses are rotated or distorted, which leads to poor perception experiences. This paper proposes a new framework to automatically transform the images to the conformable view setting by assembling different convolution neural networks. Specifically, a referential 3D ground plane is first derived from the RGB image and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms the state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness. 2019-09-09T00:00:00Z Text Journal contribution 2134/9912980.v1 https://figshare.com/articles/journal_contribution/Assembling_convolution_neural_networks_for_automatic_viewing_transformation/9912980 All Rights Reserved
spellingShingle Electrical & Electronic Engineering
Information and Computing Sciences
Engineering
Technology
Automatic viewing transform
Convolution neural networks
Deep learning
Haibin Cai
Lei Jiang
Bangli Liu
Yiqi Deng
Qinggang Meng
Assembling convolution neural networks for automatic viewing transformation
title Assembling convolution neural networks for automatic viewing transformation
title_full Assembling convolution neural networks for automatic viewing transformation
title_fullStr Assembling convolution neural networks for automatic viewing transformation
title_full_unstemmed Assembling convolution neural networks for automatic viewing transformation
title_short Assembling convolution neural networks for automatic viewing transformation
title_sort assembling convolution neural networks for automatic viewing transformation
topic Electrical & Electronic Engineering
Information and Computing Sciences
Engineering
Technology
Automatic viewing transform
Convolution neural networks
Deep learning
url https://hdl.handle.net/2134/9912980.v1