An Unsupervised Deep Learning Method for Denoising Prestack Random Noise
Published in: IEEE Geoscience and Remote Sensing Letters, 2022, Vol. 19, pp. 1-5
Main Authors: , , , ,
Format: Article
Language: English
Summary: Deep-learning-based methods have been successfully applied to random noise attenuation in seismic data. Among them, supervised methods dominate unsupervised ones. Supervised methods require accurate noise-free data as training labels, a requirement that field seismic data cannot meet. To circumvent this, some researchers have used realistic-looking synthetic data, or results denoised by conventional methods, as labels. The former approach suffers from weak generalization ability because it requires the test and training data to share the same distribution. The latter suffers from insufficient denoising ability because it can hardly exceed, by a significant margin, the conventional methods used to generate the labels. To avoid preparing noise-free labels, we propose a novel deep learning framework that attenuates random noise in prestack seismic data in an unsupervised manner. Prestack seismic data, such as common-reflection-point (CRP) gathers and common-midpoint (CMP) gathers after normal moveout (NMO) correction, exhibit high self-similarity because their events are coherent in the time-space domain and approximately horizontal from shallow to deep layers. Useful signals are more self-similar than random noise, which is incoherent and randomly distributed, so the generator convolutional neural network (GCN) learns the features of useful signals before it learns the random noise. We therefore select a specific training iteration and adopt an early stopping strategy to suppress the random noise. Both synthetic and field prestack seismic data examples demonstrate the validity of our method.
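The core idea of the abstract, that a model fitted to noisy data captures the coherent, self-similar signal in early iterations and the incoherent random noise only later, so stopping training early acts as a denoiser, can be illustrated with a deliberately simplified NumPy sketch. This is not the authors' GCN: a smoothing-operator least-squares fit stands in for the low-capacity generator, the 1-D "trace" stands in for a gather, and all names and parameters are illustrative.

```python
import numpy as np

def gaussian_kernel(width=9, sigma=2.0):
    """Normalized Gaussian smoothing kernel (illustrative stand-in for a generator)."""
    x = np.arange(width) - width // 2
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def early_stop_denoise(noisy, n_iters, lr=1.0, kernel=None):
    """Gradient descent on || conv(theta, k) - noisy ||^2, stopped after n_iters.

    Low-frequency (coherent) components are fitted in the first iterations;
    stopping early leaves the high-frequency random noise largely unfitted,
    a toy analogue of the early-stopping strategy in the abstract.
    """
    if kernel is None:
        kernel = gaussian_kernel()
    theta = np.zeros_like(noisy)
    for _ in range(n_iters):
        resid = np.convolve(theta, kernel, mode="same") - noisy
        # Symmetric kernel: the adjoint is convolution with the same kernel.
        theta -= lr * np.convolve(resid, kernel, mode="same")
    return np.convolve(theta, kernel, mode="same")

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2.0 * np.pi * 3.0 * t)            # coherent, self-similar "event"
noisy = clean + 0.5 * rng.standard_normal(t.size)  # add incoherent random noise

denoised = early_stop_denoise(noisy, n_iters=30)    # early stopping
overfit = early_stop_denoise(noisy, n_iters=5000)   # run (nearly) to convergence

mse = lambda a, b: float(np.mean((a - b) ** 2))
# Early stopping should recover the signal; running much longer refits the noise.
```

Running to near-convergence makes the fit approach the noisy data itself, which is why the choice of stopping iteration carries the entire denoising effect in this kind of scheme.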
ISSN: 1545-598X, 1558-0571
DOI: | 10.1109/LGRS.2020.3019400 |