Atom search‐Jaya‐based deep recurrent neural network for liver cancer detection

Bibliographic Details
Published in: IET Image Processing, 2021-02, Vol. 15 (2), p. 337-349
Main Authors: Navaneethakrishnan, Mariappan; Vairamuthu, Subbiah; Parthasarathy, Govindaswamy; Cristin, Rajan
Format: Article
Language: English
Description
Summary: Automatic detection of liver cancer is a fundamental requirement of computer‐aided diagnosis in the clinical sector. The traditional methods used for liver cancer detection are not effective in accurately detecting the tumour region when using a large‐sized dataset. Moreover, segmenting high‐intensity tumour regions is a complex issue for the existing methods. To overcome these issues, an accurate and efficient liver cancer detection method named the atom search‐Jaya‐based deep recurrent neural network is proposed in this research. The proposed method mimics atomic motion governed by the interaction forces and the constraint forces of the hybrid molecules. The optimal solution is identified through the fitness measure, which accepts the minimal error value as the optimal solution. The weights of the classifier are optimally updated based on the positions of the atoms over the iterations. The proposed atom search‐Jaya‐based deep recurrent neural network attained significantly better performance in accurately detecting the tumour region by exploiting the exploration ability of atoms in the search space. The results obtained by the proposed model in terms of accuracy, specificity, sensitivity, and precision are 93.64%, 96%, 95%, and 94.88%, respectively, while considering the three features and using 80% of the data for training.
ISSN: 1751-9659; 1751-9667
DOI: 10.1049/ipr2.12019
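
The summary above outlines a population-based weight optimiser in which candidate classifier weights behave like atoms, the fitness measure is the classification error, and the atom positions move towards the best solution over the iterations. The following Python sketch illustrates that general idea only; the toy data, the linear stand-in classifier, the damping coefficient, and the Jaya-style update written here are illustrative assumptions, not the authors' published atom search‐Jaya equations or their deep recurrent neural network.

import numpy as np

rng = np.random.default_rng(0)

def fitness(weights, features, labels):
    # Classification error of a minimal linear stand-in model (lower is better).
    predictions = (features @ weights > 0).astype(int)
    return np.mean(predictions != labels)

# Toy data standing in for extracted liver-image features (assumption).
features = rng.normal(size=(200, 8))
labels = (features @ rng.normal(size=8) > 0).astype(int)

n_atoms, n_iterations = 20, 50
atoms = rng.normal(size=(n_atoms, 8))      # candidate weight vectors ("atoms")
velocity = np.zeros_like(atoms)

for _ in range(n_iterations):
    errors = np.array([fitness(a, features, labels) for a in atoms])
    best, worst = atoms[errors.argmin()], atoms[errors.argmax()]
    r1, r2 = rng.random(atoms.shape), rng.random(atoms.shape)
    # Jaya-style move: attract toward the best atom and repel from the worst,
    # with a damped velocity term that loosely mimics atomic motion.
    velocity = 0.5 * velocity + r1 * (best - np.abs(atoms)) - r2 * (worst - np.abs(atoms))
    atoms = atoms + velocity

errors = np.array([fitness(a, features, labels) for a in atoms])
print("lowest classification error found:", errors.min())

In the proposed pipeline, the candidate vectors would correspond to the weights of the deep recurrent neural network, and the error would be evaluated on the segmented liver features rather than on random data.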