
Fracture Identification in Well Logging Images: Two-Stage Adaptive Network

Bibliographic Details
Published in: IEEE Transactions on Instrumentation and Measurement, 2022, Vol. 71, pp. 1-12
Main Authors: Zhang, Wei, Li, Zhipeng, Wu, Tong, Yao, Zhenqiu, Qiu, Ao, Li, Yanjun, Shi, Yibing
Format: Article
Language:English
Subjects:
Description
Summary: Automatic fracture identification and segmentation in well logging images is increasingly critical yet arduous owing to the extensive exploration for oil and gas. Domain adaptation for semantic segmentation is an appealing alternative given the lack of semantic fracture annotations. Nevertheless, previous domain adaptation strategies focus on only a single level (e.g., the input, intermediate features, or the output), which can result in a large generalization error on new well logging images. In this article, a two-stage network architecture is proposed. In stage 1, a pattern transfer network (PTN) transforms images from one domain to the other, making the two datasets visually similar. Subsequently, in stage 2, an adversarial learning model with a generator and a discriminator is introduced so that images from the two domains produced by the PTN yield similar semantic segmentation outputs. Furthermore, an attention module is embedded in the generator to filter irrelevant noise from the semantic segmentation results. Real logging images collected by an ultrasonic imaging logging instrument serve as the test datasets. Compared with earlier conventional algorithms and domain adaptation methods, the proposed method achieves better visual quality and more accurate segmentation outcomes.
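
The sketch below illustrates the general shape of such a two-stage pipeline, not the architecture published in the paper: a stage-1 image-to-image "pattern transfer" network, followed by a stage-2 segmentation generator with an attention module trained adversarially against an output-space discriminator. All module names (PatternTransferNet, AttentionGate, SegGenerator, OutputDiscriminator, train_step), layer sizes, and the loss weight lam are illustrative assumptions, written against PyTorch.

# Illustrative sketch only; module names, layer sizes, and loss weighting are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PatternTransferNet(nn.Module):
    """Stage 1 (assumed form): restyles source-domain logging images to resemble the target domain."""
    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)


class AttentionGate(nn.Module):
    """Assumed attention module: a learned spatial mask that suppresses non-fracture noise."""
    def __init__(self, ch):
        super().__init__()
        self.mask = nn.Sequential(nn.Conv2d(ch, 1, 1), nn.Sigmoid())

    def forward(self, feat):
        return feat * self.mask(feat)


class SegGenerator(nn.Module):
    """Stage 2 generator: encoder -> attention -> per-pixel fracture logits."""
    def __init__(self, ch=32, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        self.attn = AttentionGate(ch)
        self.head = nn.Conv2d(ch, n_classes, 1)

    def forward(self, x):
        return self.head(self.attn(self.encoder(x)))


class OutputDiscriminator(nn.Module):
    """Predicts whether a softmax segmentation map came from the source or target domain."""
    def __init__(self, n_classes=2, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(n_classes, ch, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ch, 1, 4, stride=2, padding=1),
        )

    def forward(self, p):
        return self.net(p)


def train_step(ptn, gen, disc, opt_g, opt_d, src_img, src_mask, tgt_img, lam=0.01):
    """One simplified stage-2 adaptation step; the PTN is assumed already trained in stage 1."""
    bce = nn.BCEWithLogitsLoss()

    # Stage 1 output: restyle the annotated source image so both domains look alike.
    with torch.no_grad():
        src_like_tgt = ptn(src_img)

    # Generator: supervised segmentation loss on the restyled source image,
    # plus an adversarial loss pushing target-domain outputs to look "source-like".
    opt_g.zero_grad()
    src_logits = gen(src_like_tgt)
    seg_loss = F.cross_entropy(src_logits, src_mask)
    tgt_prob = F.softmax(gen(tgt_img), dim=1)
    d_out = disc(tgt_prob)
    adv_loss = bce(d_out, torch.ones_like(d_out))
    (seg_loss + lam * adv_loss).backward()
    opt_g.step()

    # Discriminator: learn to separate source vs. target segmentation maps.
    opt_d.zero_grad()
    src_prob = F.softmax(gen(src_like_tgt).detach(), dim=1)
    tgt_prob = F.softmax(gen(tgt_img).detach(), dim=1)
    d_src, d_tgt = disc(src_prob), disc(tgt_prob)
    d_loss = bce(d_src, torch.ones_like(d_src)) + bce(d_tgt, torch.zeros_like(d_tgt))
    d_loss.backward()
    opt_d.step()
    return seg_loss.item(), d_loss.item()

In this kind of output-space adaptation, the discriminator only ever sees segmentation maps, so the generator is nudged to produce target-domain predictions with the same structure as its supervised source-domain predictions; the attention gate is one plausible way to realize the noise-filtering role the abstract attributes to the embedded attention module.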
ISSN: 0018-9456
1557-9662
DOI: 10.1109/TIM.2021.3130671