Advanced texture and depth coding in 3D-HEVC

Bibliographic Details
Published in: Journal of Visual Communication and Image Representation, 2018-01, Vol. 50, pp. 83–92
Main Authors: Lin, Jian-Liang; Chen, Yi-Wen; Chang, Yu-Lin; An, Jicheng; Zhang, Kai; Huang, Yu-Wen; Lei, Shawmin
Format: Article
Language: English
Description
Summary: The 3D extension of High Efficiency Video Coding (3D-HEVC) is a new international video coding standard developed by the Joint Collaborative Team on 3D Video Coding Extensions (JCT-3V) to support the coding of multiple views and their associated depth data. 3D-HEVC aims to improve the coding efficiency of 3D and multi-view videos by introducing new coding tools that exploit the correlations between views and between the texture and depth components. In this paper, an inter-view motion prediction scheme (the inter-view merge candidate) and an inter-component motion prediction scheme (the texture merge candidate) are proposed to exploit the inter-view and inter-component redundancies of the texture and depth components, respectively. Moreover, a new coding mode termed single depth mode, which simply reconstructs a coding block with a single depth value within the block-merging scheme of HEVC quadtree-based block partitioning, is also introduced. All of the proposed schemes have been adopted into 3D-HEVC. Experimental results evaluated under the common test conditions (CTC) for developing 3D-HEVC show that the proposed inter-view merge candidate, texture merge candidate, and single depth mode together achieve significant BD-rate reductions of 19.5% for the dependent texture views and 8.3% for the synthesized texture views.
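The single depth mode described above can be illustrated with a minimal sketch: an entire coding block is reconstructed from one depth value picked from a short candidate list of neighboring reconstructed samples. The candidate positions and fallback value below are illustrative assumptions for exposition, not the normative 3D-HEVC derivation process.

```python
import numpy as np

def single_depth_mode(recon, x, y, size, candidate_index=0):
    """Sketch of single depth mode: fill a size x size coding block at
    (x, y) with one depth value chosen from a candidate list built from
    neighboring reconstructed depth samples (positions are assumptions)."""
    candidates = []
    if x > 0:                       # sample from the left neighbor column
        candidates.append(recon[y, x - 1])
    if y > 0:                       # sample from the row above
        candidates.append(recon[y - 1, x])
    if not candidates:              # fallback: mid-range value for 8-bit depth
        candidates.append(128)
    value = candidates[candidate_index % len(candidates)]
    recon[y:y + size, x:x + size] = value   # one value for the whole block
    return value

# Example: reconstruct a 4x4 block using its left-neighbor depth sample.
depth = np.zeros((8, 8), dtype=np.uint8)
depth[4, 3] = 77                    # reconstructed sample left of the block
single_depth_mode(depth, 4, 4, 4)   # fills depth[4:8, 4:8] with 77
```

Because only the candidate index needs to be signaled (no residual, no partitioning below the block), this mode is cheap for the large smooth regions typical of depth maps, which is consistent with the bit-rate savings reported in the abstract.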
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2017.11.003