
Unsupervised sound localization via iterative contrastive learning

Bibliographic Details
Published in: Computer Vision and Image Understanding, 2023-01, Vol. 227, Article 103602
Main Authors: Lin, Yan-Bo; Tseng, Hung-Yu; Lee, Hsin-Ying; Lin, Yen-Yu; Yang, Ming-Hsuan
Format: Article
Language:English
Summary: Sound localization aims to find the source of an audio signal in the visual scene. However, annotating the correlations between signals sampled from the audio and visual modalities is labor-intensive, which makes it difficult to supervise a model for this task. In this work, we propose an iterative contrastive learning framework that requires no data annotations. At each iteration, the proposed method takes (1) the localization results on images predicted in the previous iteration and (2) the semantic relationships inferred from the audio signals as pseudo-labels. We then use the pseudo-labels to learn the correlation between the visual and audio signals sampled from the same video (intra-frame sampling) as well as the association between those extracted across videos (inter-frame relation). Our iterative strategy gradually encourages the localization of the sounding objects and reduces the correlation between the non-sounding regions and the reference audio. Quantitative and qualitative experimental results demonstrate that the proposed framework performs favorably against existing unsupervised and weakly-supervised methods on the sound localization task.
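
For readers who want a concrete picture of the alternation the abstract describes, the sketch below shows one possible reading in PyTorch. Everything in it is an assumption made for illustration, not the authors' released code: the contrastive loss formulation, the feature shapes, the hypothetical vision_net and audio_net modules, the loader yielding (frames, clips, idx), and the temperature and threshold values are all stand-ins for the components the abstract names.

# Hypothetical sketch of one outer iteration of the framework described
# in the abstract: train a contrastive objective against fixed pseudo-labels,
# then refresh the pseudo-labels from the updated localization maps.
import torch
import torch.nn.functional as F

def contrastive_loss(vis_feat, aud_feat, pseudo_mask, tau=0.07):
    # vis_feat: (B, C, H, W) per-location visual features.
    # aud_feat: (B, C) clip-level audio features.
    # pseudo_mask: (B, H, W) map of likely sounding regions, taken from the
    # previous iteration's localization results (the pseudo-labels).
    vis = F.normalize(vis_feat, dim=1)
    aud = F.normalize(aud_feat, dim=1)
    # Similarity of every spatial location to every clip's audio: (B, B, H, W).
    # The diagonal pairs each frame with its own audio (intra-video positives).
    sim = torch.einsum('bchw,kc->bkhw', vis, aud) / tau
    diag = sim.diagonal(dim1=0, dim2=1).permute(2, 0, 1)      # (B, H, W)
    area = pseudo_mask.flatten(1).sum(1).clamp(min=1.0)
    pos = (diag * pseudo_mask).flatten(1).sum(1) / area        # masked positives
    # Clip-level similarities across videos act as inter-video negatives.
    neg = torch.logsumexp(sim.flatten(2).mean(2), dim=1)       # (B,)
    return (neg - pos).mean()

def run_iteration(vision_net, audio_net, loader, pseudo_masks, opt, thresh=0.5):
    # Phase 1: optimize the encoders while the pseudo-labels stay frozen.
    for frames, clips, idx in loader:                  # idx indexes the dataset
        loss = contrastive_loss(vision_net(frames), audio_net(clips),
                                pseudo_masks[idx])
        opt.zero_grad()
        loss.backward()
        opt.step()
    # Phase 2: regenerate pseudo-labels from the new audio-visual heatmaps,
    # so the next iteration starts from sharper sound-source estimates.
    with torch.no_grad():
        for frames, clips, idx in loader:
            v = F.normalize(vision_net(frames), dim=1)
            a = F.normalize(audio_net(clips), dim=1)
            loc = torch.einsum('bchw,bc->bhw', v, a)   # per-pixel similarity
            pseudo_masks[idx] = (loc > thresh).float()

The point the sketch tries to convey is the alternation: the contrastive objective is trained against frozen pseudo-labels, and only after a full pass are the pseudo-labels re-estimated, which matches the abstract's claim that localization is refined iteratively while the correlation between non-sounding regions and the reference audio is suppressed.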
ISSN: 1077-3142
EISSN: 1090-235X
DOI: 10.1016/j.cviu.2022.103602