
An optimized model based on adaptive convolutional neural network and grey wolf algorithm for breast cancer diagnosis

Bibliographic Details
Published in: PLoS ONE 2024-08, Vol. 19 (8), p. e0304868
Main Authors: Alnowaiser, Khaled; Saber, Abeer; Hassan, Esraa; Awad, Wael A.
Format: Article
Language:English
Description
Summary: Medical image classification (IC) is the task of categorizing images according to the appropriate pathological stage. It is a crucial stage in computer-aided diagnosis (CAD) systems, which were created to help radiologists read and analyze medical images and to support the early detection of tumors and other disorders. Convolutional neural network (CNN) models have recently seen increasing use in medicine and achieve strong IC results, particularly in terms of performance and robustness. The proposed method uses pre-trained models such as Dense Convolutional Network (DenseNet)-121 and Visual Geometry Group (VGG)-16 as feature-extractor networks, bidirectional long short-term memory (BiLSTM) layers for temporal feature extraction, and the Support Vector Machine (SVM) and Random Forest (RF) algorithms for classification. To improve performance, the hyperparameters of the selected pre-trained CNNs are optimized with a modified grey wolf optimization method. Experimental analysis of the presented model shows that the VGG16-based variant is powerful for breast cancer (BC) classification, with overall accuracy, sensitivity, specificity, precision, and area under the ROC curve (AUC) of 99.86%, 99.9%, 99.7%, 97.1%, and 1.0, respectively, on the Mammographic Image Analysis Society (MIAS) dataset, and 99.4%, 99.03%, 99.2%, 97.4%, and 1.0, respectively, on the INbreast dataset.
ISSN: 1932-6203
DOI: 10.1371/journal.pone.0304868
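
The summary above describes a hybrid pipeline: a pre-trained CNN backbone (e.g. VGG-16) extracts spatial features, BiLSTM layers model those features as a sequence, and an SVM or Random Forest performs the final classification. The sketch below is one plausible reading of that pipeline in Keras and scikit-learn; the input size, layer widths, preprocessing, and classifier settings are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a VGG16 -> BiLSTM feature extractor feeding a classical classifier.
# All sizes and settings here are assumptions for illustration.
from tensorflow.keras.applications import VGG16
from tensorflow.keras import layers, Model
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier  # RF alternative mentioned in the abstract

def build_feature_extractor(input_shape=(224, 224, 3), lstm_units=128):
    """Frozen ImageNet VGG16 backbone followed by a BiLSTM over its feature maps."""
    backbone = VGG16(weights="imagenet", include_top=False, input_shape=input_shape)
    backbone.trainable = False
    x = backbone.output                                   # (batch, 7, 7, 512) feature maps
    x = layers.Reshape((49, 512))(x)                      # treat the 7x7 grid as a 49-step sequence
    x = layers.Bidirectional(layers.LSTM(lstm_units))(x)  # contextual ("temporal") features
    return Model(backbone.input, x)

# Assumed usage: X_train / X_test are preprocessed mammogram images, y_* the labels
# (e.g. benign vs. malignant). Deep features feed an SVM (or RandomForestClassifier).
# extractor = build_feature_extractor()
# f_train = extractor.predict(X_train, batch_size=16)
# f_test = extractor.predict(X_test, batch_size=16)
# clf = SVC(kernel="rbf", C=1.0).fit(f_train, y_train)
# print("accuracy:", clf.score(f_test, y_test))
```

The abstract also states that the CNN hyperparameters are tuned with a modified grey wolf optimization (GWO) method, but the modification itself is not described there. The following is therefore a minimal sketch of the standard GWO update only, minimizing a placeholder objective (e.g. validation loss as a function of a hyperparameter vector) over box constraints.

```python
# Minimal standard grey wolf optimizer; the paper's modified variant is not reproduced here.
import numpy as np

def gwo_minimize(f, bounds, n_wolves=10, n_iter=30, seed=0):
    """Minimize f(x) for x within bounds, an array of shape (dim, 2) of (low, high) pairs."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)
    wolves = rng.uniform(lo, hi, size=(n_wolves, dim))    # initial population
    fitness = np.apply_along_axis(f, 1, wolves)
    for t in range(n_iter):
        order = np.argsort(fitness)
        alpha, beta, delta = wolves[order[:3]]            # three best wolves lead the pack
        a = 2 - 2 * t / n_iter                            # coefficient decreasing linearly from 2 to 0
        for i in range(n_wolves):
            x_new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                D = np.abs(C * leader - wolves[i])        # distance to the leader
                x_new += leader - A * D                   # pull toward (or around) the leader
            wolves[i] = np.clip(x_new / 3.0, lo, hi)      # average of the three pulls
            fitness[i] = f(wolves[i])
    best = int(np.argmin(fitness))
    return wolves[best], fitness[best]
```

In a hyperparameter-tuning setting, `f` would train or fine-tune the CNN with the candidate values (learning rate, batch size, dropout, etc.) and return a validation error to minimize.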