
Chest X-Ray Outlier Detection Model Using Dimension Reduction and Edge Detection

Bibliographic Details
Published in: IEEE Access, 2021, Vol. 9, p. 86096-86106
Main Authors: Kim, Chang-Min; Hong, Ellen J.; Park, Roy C.
Format: Article
Language: English
Description
Summary: With the advancement of Artificial Intelligence technology, various applied software is being developed and studies are actively conducted on detection, classification, and prediction through interdisciplinary convergence and integration. Among them, medical AI has been drawing huge interest and popularity in Computer-Aided Diagnosis, which collects human body signals to predict abnormal health symptoms and diagnoses diseases through medical images such as X-ray and CT. Since medical X-ray and CT use high-resolution images, learning and recognition involve heavy computation, requiring high-specification equipment and large energy consumption and incurring huge costs to create an operating environment. Thus, this paper proposes a chest X-ray outlier detection model using dimension reduction and edge detection to solve these issues. The proposed method scans an X-ray image using a window of a certain size, conducts difference imaging of adjacent segment-images, and extracts the edge information in binary form through the AND operation. To convert the extracted edges, which are visual information, into a series of lines, they are convolved with a detection filter whose coefficients are powers of two (2^n), and the lines are divided into 16 types. By counting the converted data, a one-dimensional array of size 16 is produced per segment-image, and this reduced data is used as the input to the RNN-based learning model. In addition, the study conducted various experiments based on the COVID-chest X-ray dataset to evaluate the performance of the proposed model. According to the experimental results, LFA-RNN showed the highest accuracy at 97.5%, followed by VGG at 96.6%, CRNN at 96.1%, AlexNet at 94.1%, Conv1D at 79.4%, and DNN at 78.9%. In addition, LFA-RNN showed the lowest loss, at about 0.0357.
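As a rough illustration of the segment-based edge encoding summarized above, the following Python sketch scans an image with a fixed window, differences adjacent segment-images, combines the binarized differences with an AND operation, and encodes each 2x2 edge neighborhood with power-of-two (2^n) weights into one of 16 line types. The window size, stride, binarization threshold, and the exact 2x2 filter are assumptions chosen only to make the counting step concrete; the record does not give the paper's actual parameters.

import numpy as np

def segment_edge_features(image, window=32, stride=32, thresh=16):
    """Sketch of the described reduction: scan with a window,
    difference adjacent segment-images, AND the binarized maps
    into an edge map, encode 2x2 neighborhoods with 2**n weights
    into one of 16 line types, and count the types.
    Parameters here are illustrative assumptions only."""
    segments = [
        image[r:r + window, c:c + window].astype(int)
        for r in range(0, image.shape[0] - window + 1, stride)
        for c in range(0, image.shape[1] - window + 1, stride)
    ]

    features = []
    for a, b, c in zip(segments, segments[1:], segments[2:]):
        # Difference imaging of adjacent segment-images, binarized,
        # then combined with the AND operation into one edge map.
        edges = (np.abs(b - a) > thresh) & (np.abs(c - b) > thresh)

        # Power-of-two (2**n) weights turn each 2x2 binary patch
        # into a code 0..15, i.e. one of 16 line types.
        codes = (edges[:-1, :-1] * 1 + edges[:-1, 1:] * 2
                 + edges[1:, :-1] * 4 + edges[1:, 1:] * 8)

        # Count the 16 types: a one-dimensional array of size 16
        # per segment-image, the reduced input for the RNN model.
        features.append(np.bincount(codes.ravel(), minlength=16))

    return np.stack(features)

The resulting sequence of 16-element count vectors stands in for raw pixels as the reduced input that the abstract describes feeding to the RNN-based learning model.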
ISSN: 2169-3536
DOI: 10.1109/ACCESS.2021.3086103