
Real-time self-supervised achromatic face colorization


Bibliographic Details
Published in: The Visual Computer, 2023-12, Vol. 39 (12), p. 6521-6536
Main Authors: Tiwari, Hitika, Subramanian, Venkatesh K., Chen, Yong-Sheng
Format: Article
Language:English
Description
Summary: Recent deep learning-based 2D face image colorization techniques demonstrate significant improvement in colorization accuracy and detail preservation. However, the generation of a 3D counterpart is beyond the scope of these methods, despite its extensive applications. Moreover, these approaches require a significant amount of inference time, thus posing a challenge for real-time applications. In addition, monocular 3D face reconstruction methods produce skin color consistent with the achromatic 2D face, resulting in a gray-scale 3D face texture. Therefore, we propose a novel real-time Self-Supervised COoperative COlorizaTion of Achromatic Faces (COCOTA) framework, which estimates colored 3D faces from both monocular color and achromatic face images without introducing additional dependencies. The proposed network contains (1) a Chromatic Pipeline to obtain 3D face alignment and geometric details for color face images and (2) an Achromatic Pipeline for recovering texture from achromatic images. The proposed dual-pipeline feature loss and parameter-sharing technique aid cooperation between the COCOTA pipelines, facilitating knowledge transfer between them. We compare the color accuracy of our method with several 3D face reconstruction approaches on the challenging CelebA-test and FairFace datasets. COCOTA outperforms the current state-of-the-art method by a large margin (e.g., improvements of 25.3%, 39.6%, and 17% on perceptual error, 3D color-based error, and 2D pixel-level error metrics, respectively). Also, we show an improvement in the proposed method's inference time compared to 2D image colorization techniques, demonstrating the effectiveness of the proposed method.
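The cooperation mechanism the abstract describes — two pipelines that share parameters and are tied together by a feature loss — can be illustrated with a toy sketch. This is not the authors' code: the linear "encoder", the weight shapes, and the function names are all hypothetical, and the luma conversion stands in for whatever achromatic input the Achromatic Pipeline actually receives. The sketch only shows the general idea of penalizing the distance between features the two pipelines extract through shared weights.

```python
# Toy sketch (assumed, not from the paper): two pipelines sharing one
# linear encoder, tied by a feature-consistency loss.

def encode(features, weights):
    """Shared linear encoder: one output per weight row (dot product)."""
    return [sum(w * f for w, f in zip(row, features)) for row in weights]

def to_achromatic(rgb_pixels):
    """Gray-scale conversion via ITU-R BT.601 luma, replicated to 3 channels."""
    gray = []
    for r, g, b in rgb_pixels:
        y = 0.299 * r + 0.587 * g + 0.114 * b
        gray.append((y, y, y))
    return gray

def dual_pipeline_feature_loss(color_img, shared_weights):
    """Mean squared distance between the two pipelines' shared-encoder features."""
    flat_color = [c for px in color_img for c in px]
    flat_gray = [c for px in to_achromatic(color_img) for c in px]
    f_chroma = encode(flat_color, shared_weights)    # chromatic pipeline
    f_achroma = encode(flat_gray, shared_weights)    # achromatic pipeline
    return sum((a - b) ** 2 for a, b in zip(f_chroma, f_achroma)) / len(f_chroma)

# Two RGB pixels and two shared encoder rows over the 6 flattened inputs.
img = [(1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
W = [[0.1] * 6, [0.2] * 6]
loss = dual_pipeline_feature_loss(img, W)
```

Minimizing such a loss during training would push the achromatic branch to produce features (and hence textures) consistent with those of the color branch, which is one plausible reading of how the shared parameters enable knowledge transfer; the paper's actual losses and architectures are more elaborate.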
ISSN: 0178-2789, 1432-2315
DOI: 10.1007/s00371-022-02746-1