Object Detection and Segmentation Using Deeplabv3 Deep Neural Network for a Portable X-Ray Source Model

Bibliographic Details
Published in: Journal of Advanced Computational Intelligence and Intelligent Informatics, 2022-09, Vol. 26 (5), p. 842-850
Main Authors: Rogelio, Jayson P., Dadios, Elmer P., Vicerra, Ryan Ray P., Bandala, Argel A.
Format: Article
Language: English
Summary: The primary purpose of this research is to implement the Deeplabv3 architecture's deep neural network to detect and segment portable X-ray source model parts, namely the body, handle, and aperture, in a same-color-scheme scenario. The aperture, being smaller and of lower resolution, is more difficult for deep convolutional neural networks to segment: as the input feature map diminishes while the network deepens, information about the aperture, or any object at a smaller scale, may be lost. The study recommends the Deeplabv3 architecture to overcome this issue, as it has proven successful for semantic segmentation. Based on the experiment conducted, the average precision (AP) for the body, handle, and aperture of the portable X-ray source model is 91.75%, 20.41%, and 6.25%, respectively; detection of the "body" part thus has the highest average precision, while detection of the "aperture" part has the lowest. The study found that detection and segmentation of the portable X-ray source model using the Deeplabv3 deep neural network architecture was successful, but improvement is needed to raise the overall mean AP of 39.47%.
ISSN:1343-0130
1883-8014
DOI:10.20965/jaciii.2022.p0842
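
The overall mean AP quoted in the summary is consistent with an unweighted arithmetic mean of the three per-class average-precision values. A minimal sketch (class names and percentages taken directly from the summary; the averaging scheme is an assumption, since the record does not state how the mean was computed) confirms the figure:

```python
# Per-class average precision (%) reported for the portable X-ray source model.
ap = {"body": 91.75, "handle": 20.41, "aperture": 6.25}

# Assumed: overall mean AP is the unweighted arithmetic mean over the classes.
mean_ap = sum(ap.values()) / len(ap)
print(round(mean_ap, 2))  # → 39.47, matching the reported overall mean AP
```

The large spread between the "body" class and the "aperture" class shows why the mean is pulled down to 39.47% despite one class exceeding 90%.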