CoLaNet: Adaptive Context and Latent Information Blending for Face Image Inpainting

Published in: IEEE Signal Processing Letters, 2024, Vol. 31, pp. 91-95
Main Authors:
Format: Article
Language: English
Subjects:

Summary: Face inpainting, the task of plausibly filling in missing regions of a face image, has seen great advances with deep learning-based approaches. To fill in a missing region, existing methods use either information from the surrounding visible region of the input image itself (i.e., context) or prior knowledge learned from the training data (i.e., latent). However, we find that exclusive use of either type of information is sub-optimal: which of the context-based and latent-based approaches is more effective differs from one missing region to another. To this end, we propose CoLaNet, a novel framework that adaptively blends context and latent information to inpaint face images. Specifically, the two types of information are balanced based on the attention between the missing region and the rest of the image, so regions strongly correlated with the visible region rely more on context information. Consequently, the adaptive use of context and latent information leads to better inpainting performance across diverse face images.

ISSN: 1070-9908, 1558-2361
DOI: 10.1109/LSP.2023.3340998
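
The summary above describes blending context features (gathered by attention from the visible region) with latent-prior features, weighted by how strongly each missing location correlates with the visible region. Below is a minimal PyTorch sketch of that idea under stated assumptions: a standard non-local attention layer over the encoder features and a max-attention heuristic for the per-pixel blending weight. The module name `ContextLatentBlend`, the tensor shapes, and the way `alpha` is derived are illustrative assumptions, not the authors' actual CoLaNet architecture.

```python
# Sketch of attention-based blending of context and latent features.
# Assumptions (not from the paper): non-local attention with 1x1 projections,
# and the max attention weight as the context/latent blending coefficient.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextLatentBlend(nn.Module):
    """Blend attention-gathered context features with latent-prior features."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 projections for query/key/value, as in standard non-local attention
        self.query = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 2, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feat, latent, mask):
        # feat:   (B, C, H, W) features of the masked input image (context source)
        # latent: (B, C, H, W) features decoded from a learned prior (latent source)
        # mask:   (B, 1, H, W) 1 inside the missing region, 0 in the visible region
        b, c, h, w = feat.shape
        q = self.query(feat).flatten(2).transpose(1, 2)   # (B, HW, C//2)
        k = self.key(feat).flatten(2)                     # (B, C//2, HW)
        v = self.value(feat).flatten(2).transpose(1, 2)   # (B, HW, C)

        # Attention from every location to the visible locations only
        scores = torch.bmm(q, k) / (c // 2) ** 0.5        # (B, HW, HW)
        visible = (1.0 - mask).flatten(2)                 # (B, 1, HW), 1 = visible key
        scores = scores.masked_fill(visible == 0, float("-inf"))
        attn = F.softmax(scores, dim=-1)

        # Context features reconstructed for each location from the visible region
        context = torch.bmm(attn, v).transpose(1, 2).reshape(b, c, h, w)

        # Per-pixel blending weight: how strongly a location is correlated with
        # the visible region (max attention weight -- a heuristic assumption)
        alpha = attn.max(dim=-1).values.reshape(b, 1, h, w)

        # Strongly correlated regions lean on context; weakly correlated ones
        # lean on the learned latent prior
        blended = alpha * context + (1.0 - alpha) * latent

        # Only the missing region is filled; the visible region is kept as-is
        return mask * blended + (1.0 - mask) * feat


if __name__ == "__main__":
    blend = ContextLatentBlend(channels=64)
    feat = torch.randn(1, 64, 32, 32)
    latent = torch.randn(1, 64, 32, 32)
    mask = torch.zeros(1, 1, 32, 32)
    mask[..., 8:24, 8:24] = 1.0            # square hole in the centre
    out = blend(feat, latent, mask)
    print(out.shape)                       # torch.Size([1, 64, 32, 32])
```

The key design point the sketch illustrates is that the blending coefficient is derived from the attention map itself, so no extra supervision is needed: hole pixels that attend strongly to visible pixels are filled mostly from context, while the rest fall back on the learned latent prior.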