SRLibrary: Comparing different loss functions for super-resolution over various convolutional architectures
Published in: Journal of Visual Communication and Image Representation, 2019-05, Vol. 61, pp. 178-187
Main Authors: , ,
Format: Article
Language: English
Summary:
• This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, by surrogating the reconstructive map between Low Resolution (LR) and High Resolution (HR) images with convolutional filters.
• Eight loss functions have been individually incorporated with popular CNN architectures.
• Through a comprehensive empirical analysis, we have found that some loss functions generate more pleasing HR outputs and achieve high statistical accuracy rates.
This study analyzes the effectiveness of various loss functions on performance improvement for Single Image Super-Resolution (SISR) using Convolutional Neural Network (CNN) models, by surrogating the reconstructive map between Low Resolution (LR) and High Resolution (HR) images with convolutional filters. In total, eight loss functions are separately incorporated with the Adam optimizer. Through experimental evaluations on different datasets, it is observed that some parametric and non-parametric robust loss functions promise impressive accuracies, whereas the remaining ones are sensitive to noise, which misleads the learning process and consequently results in lower-quality HR outcomes. It turns out that using either the Difference of Structural Similarity (DSSIM), Charbonnier, or L1 loss function within the optimization mechanism is a proper choice, considering their excellent reconstruction results. Among them, the Charbonnier and L1 loss functions are the fastest when the computational time cost of the training stage is examined.
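To make the three recommended objectives concrete, here is a minimal PyTorch sketch, not the paper's SRLibrary implementation: L1 is the mean absolute error, Charbonnier is its smooth relaxation sqrt(diff^2 + eps^2), and DSSIM is (1 - SSIM) / 2. The epsilon value, the 8x8 averaging window, and the stability constants c1/c2 are assumptions, since the paper's exact settings are not given here.

```python
# Illustrative sketches of the three losses the abstract recommends.
# NOTE: not the paper's SRLibrary code; eps, the 8x8 window, and
# c1/c2 below are assumed values.
import torch
import torch.nn.functional as F

def l1_loss(sr, hr):
    """Mean absolute error between super-resolved and ground-truth images."""
    return torch.mean(torch.abs(sr - hr))

def charbonnier_loss(sr, hr, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable relaxation of L1,
    sqrt(diff^2 + eps^2), robust to outlier pixels."""
    return torch.mean(torch.sqrt((sr - hr) ** 2 + eps ** 2))

def dssim_loss(sr, hr, c1=0.01 ** 2, c2=0.03 ** 2):
    """DSSIM = (1 - SSIM) / 2 over non-overlapping 8x8 windows.
    A plain average window is used here instead of the usual Gaussian."""
    mu_x, mu_y = F.avg_pool2d(sr, 8), F.avg_pool2d(hr, 8)
    var_x = F.avg_pool2d(sr * sr, 8) - mu_x ** 2     # local variances
    var_y = F.avg_pool2d(hr * hr, 8) - mu_y ** 2
    cov_xy = F.avg_pool2d(sr * hr, 8) - mu_x * mu_y  # local covariance
    ssim = ((2 * mu_x * mu_y + c1) * (2 * cov_xy + c2)) / (
        (mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2)
    )
    return torch.mean((1.0 - ssim) / 2.0)
```

Any of these drops into the Adam-based training loop the abstract describes, e.g. `loss = charbonnier_loss(model(lr_batch), hr_batch)` followed by `loss.backward()` and an optimizer step.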
ISSN: 1047-3203, 1095-9076
DOI: 10.1016/j.jvcir.2019.03.027