Five machine learning-based radiomics models for preoperative prediction of histological grade in hepatocellular carcinoma
Published in: Journal of Cancer Research and Clinical Oncology, 2023-11, Vol. 149 (16), p. 15103-15112
Main Authors: , , , , , , ,
Format: Article
Language: English
Summary:
Purpose
To compare the efficacy of radiomics models built with five machine learning algorithms in predicting the histological grade of hepatocellular carcinoma (HCC) before surgery, and to develop the most stable model for classifying high-risk HCC patients.
Methods
Contrast-enhanced computed tomography (CECT) images acquired before surgery from 175 HCC patients were analysed, and radiomics features were extracted from the CECT images (arterial and portal phases). Five machine learning algorithms, namely Bayes, random forest (RF), k-nearest neighbors (KNN), logistic regression (LR), and support vector machine (SVM), were applied to establish the models. The stability of the five models was measured by the relative standard deviation (RSD) of the area under the curve (AUC), and the model with the lowest RSD was chosen as the most stable model for predicting the histological grade of HCC. The AUC and DeLong tests were used to assess the predictive efficacy of the models.
Results
High-grade HCC accounted for 28.57% (50/175) of the patients. The RSD of the AUC was lowest for the RF model (2.3%), followed by Bayes (3.2%), KNN (6.4%), SVM (8.7%) and LR (31.3%). In addition, the RF model (AUC = 0.995) outperformed the other four models in the training set (p
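To make the stability criterion in the Methods concrete: the RSD is the standard deviation of the cross-validated AUC divided by its mean, expressed as a percentage, and the model with the lowest RSD is preferred. The sketch below illustrates only this selection step; it is not the authors' code, and the feature matrix, labels, cross-validation scheme, and hyperparameters are placeholder assumptions (the radiomics feature extraction from CECT images is not reproduced here).

```python
# Minimal sketch of RSD-based model selection over cross-validated AUC.
# Assumptions: `X` and `y` are synthetic placeholders standing in for the
# radiomics features and high-/low-grade labels; the study itself extracted
# features from arterial- and portal-phase CECT images of 175 patients.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(175, 30))                 # placeholder radiomics feature matrix
y = (rng.random(175) < 50 / 175).astype(int)   # placeholder labels (~28.57% high grade)

models = {
    "Bayes": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=500, random_state=0),
    "KNN": KNeighborsClassifier(),
    "LR": LogisticRegression(max_iter=1000),
    "SVM": SVC(probability=True, random_state=0),
}

# Repeated stratified cross-validation yields a distribution of AUCs per model.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)

results = {}
for name, model in models.items():
    aucs = cross_val_score(model, X, y, scoring="roc_auc", cv=cv)
    rsd = aucs.std() / aucs.mean() * 100.0      # relative standard deviation, %
    results[name] = (aucs.mean(), rsd)

# Lower RSD of the AUC = more stable model.
for name, (mean_auc, rsd) in sorted(results.items(), key=lambda kv: kv[1][1]):
    print(f"{name}: mean AUC = {mean_auc:.3f}, RSD = {rsd:.1f}%")
most_stable = min(results, key=lambda k: results[k][1])
print(f"Most stable model by RSD: {most_stable}")
```

With the study's actual feature matrix and labels in place of the placeholders, the same loop could be used to rank the five models as in the Results; the DeLong comparison of AUCs is a separate step not shown here.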
ISSN: 0171-5216, 1432-1335
DOI: 10.1007/s00432-023-05327-4