
Assembling Convolution Neural Networks for Automatic Viewing Transformation

Bibliographic Details
Published in: IEEE Transactions on Industrial Informatics, Jan. 2020, Vol. 16, No. 1, pp. 587-594
Main Authors: Cai, Haibin, Jiang, Lei, Liu, Bangli, Deng, Yiqi, Meng, Qinggang
Format: Article
Language: English
Summary: Images taken under different camera poses are rotated or distorted, which leads to a poor perception experience. This article proposes a new framework that automatically transforms images to a conformable viewing setting by assembling different convolutional neural networks. Specifically, a referential three-dimensional ground plane is first derived from the color image, and a novel projection mapping algorithm is developed to achieve automatic viewing transformation. Extensive experimental results demonstrate that the proposed method outperforms state-of-the-art vanishing-point-based methods by a large margin in terms of accuracy and robustness.
ISSN: 1551-3203, 1941-0050
DOI: 10.1109/TII.2019.2940136
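The viewing transformation described in the summary amounts to re-projecting image points from the captured camera pose onto an upright reference plane. The sketch below is not the paper's CNN-based pipeline; it only illustrates the underlying geometric step with a planar homography estimated from four point correspondences via the standard direct linear transform (DLT). All function names are illustrative assumptions.

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography H mapping src points to dst points
    (>= 4 correspondences) using the direct linear transform (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H.
        rows.append([-x, -y, -1,  0,  0,  0, u * x, u * y, u])
        rows.append([ 0,  0,  0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(rows, dtype=float)
    # The solution is the right singular vector for the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so H[2,2] == 1

def warp_point(H, pt):
    """Apply homography H to a 2-D point (homogeneous divide included)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Corners of a perspective-distorted quadrilateral mapped to an upright unit square.
src = [(0.0, 0.0), (1.0, 0.0), (1.2, 1.0), (-0.2, 1.0)]
dst = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
H = homography_from_points(src, dst)
```

In the paper's setting, the reference plane would come from the learned 3-D ground-plane estimate rather than from manually chosen correspondences; the warp itself is the same projective mapping.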