
Simulating analogue film damage to analyse and improve artefact restoration on high‐resolution scans

Bibliographic Details
Published in: Computer Graphics Forum, May 2023, Vol. 42 (2), pp. 133-148
Main Authors: Ivanova, D., Williamson, J., Henderson, P.
Format: Article
Language: English
Description
Summary: Digital scans of analogue photographic film typically contain artefacts such as dust and scratches. Automated removal of these is an important part of preservation and dissemination of photographs of historical and cultural importance. While state‐of‐the‐art deep learning models have shown impressive results in general image inpainting and denoising, film artefact removal is an understudied problem. It has particularly challenging requirements, due to the complex nature of analogue damage, the high resolution of film scans, and potential ambiguities in the restoration. There are no publicly available high‐quality datasets of real‐world analogue film damage for training and evaluation, making quantitative studies impossible. We address the lack of ground‐truth data for evaluation by collecting a dataset of 4K damaged analogue film scans paired with manually‐restored versions produced by a human expert, allowing quantitative evaluation of restoration performance. We have made the dataset available at https://doi.org/10.6084/m9.figshare.21803304. We construct a larger synthetic dataset of damaged images with paired clean versions using a statistical model of artefact shape and occurrence learnt from real, heavily‐damaged images. We carefully validate the realism of the simulated damage via a human perceptual study, showing that even expert users find our synthetic damage indistinguishable from real. In addition, we demonstrate that training with our synthetically damaged dataset leads to improved artefact segmentation performance when compared to previously proposed synthetic analogue damage overlays. The synthetically damaged dataset can be found at https://doi.org/10.6084/m9.figshare.21815844, and the annotated authentic artefacts along with the resulting statistical damage model at https://github.com/daniela997/FilmDamageSimulator. Finally, we use these datasets to train and analyse the performance of eight state‐of‐the‐art image restoration methods on high‐resolution scans. We compare both methods which directly perform the restoration task on scans with artefacts, and methods which require a damage mask to be provided for the inpainting of artefacts. We modify the methods to process the inputs in a patch‐wise fashion to operate on original high‐resolution film scans.
ISSN: 0167-7055 (print); 1467-8659 (electronic)
DOI: 10.1111/cgf.14749
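
The summary above notes that the evaluated restoration methods were modified to process inputs in a patch‐wise fashion so they can operate on full‐resolution 4K scans. The article itself details how this was done for each method; the snippet below is only a minimal, generic sketch of such a sliding‐window scheme in Python. Here `restore_patch` is a hypothetical stand‐in for any single‐patch restoration model, and the overlap‐averaging blend is an assumed strategy for illustration, not the authors' exact procedure.

```python
# Illustrative sketch only: patch-wise restoration of a high-resolution scan.
# `restore_patch` is a hypothetical placeholder for a learned restoration
# model; the tiling and overlap-averaging here are generic assumptions,
# not the implementation used in the article.
import numpy as np

def restore_patch(patch: np.ndarray) -> np.ndarray:
    """Placeholder for a restoration model applied to one patch."""
    return patch  # identity, for demonstration only

def restore_highres(image: np.ndarray, patch: int = 512, overlap: int = 64) -> np.ndarray:
    """Restore an (H, W, C) scan by running the model on overlapping tiles
    and averaging the overlapping regions to suppress seams."""
    h, w, _ = image.shape
    out = np.zeros(image.shape, dtype=np.float64)
    weight = np.zeros((h, w, 1), dtype=np.float64)
    step = patch - overlap
    for y in range(0, h, step):
        for x in range(0, w, step):
            # Clamp the window so tiles near the border stay inside the image.
            y0 = max(min(y, h - patch), 0)
            x0 = max(min(x, w - patch), 0)
            tile = image[y0:y0 + patch, x0:x0 + patch]
            restored = restore_patch(tile)
            out[y0:y0 + tile.shape[0], x0:x0 + tile.shape[1]] += restored
            weight[y0:y0 + tile.shape[0], x0:x0 + tile.shape[1]] += 1.0
    return (out / np.maximum(weight, 1.0)).astype(image.dtype)
```

Overlapping the tiles and averaging the overlap is a common way to avoid visible seams at patch boundaries; the methods compared in the paper may use different tile sizes, strides, or blending, and mask‐based inpainting methods additionally take the damage mask as a per‐patch input.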