Multimodal autism detection: Deep hybrid model with improved feature level fusion
Published in: Computer Methods and Programs in Biomedicine, 2024-12, Vol. 260, p. 108492, Article 108492
Format: Article
Language: English
Summary: •This article proposes a Multimodal Autism Detection with Deep Hybrid Model (MADDHM) based on two modalities: face images and EEG signals. •After extraction, the features are fused via improved feature-level fusion. •The hybrid model combines CNN and Bi-GRU networks. •The performance of the proposed model is validated against traditional models using several performance measures.
Autism spectrum disorder (ASD) is a neurodevelopmental condition characterized by social communication difficulties. Earlier methods of diagnosing autism relied largely on error-prone behavioral observation of symptoms. More intelligent approaches are under development, but their prediction accuracy still demands improvement. Furthermore, computer-aided diagnosis systems based on machine learning algorithms are extremely time-consuming and difficult to design. To overcome these problems, this study develops a novel autism detection model using deep learning techniques.
The suggested autism detection methodology comprises four phases: preprocessing, feature extraction, improved feature-level fusion, and detection. First, both input modalities are preprocessed to prepare them for the subsequent stages: the facial image with the Gabor filtering technique, and the EEG data with Wiener filtering. Next, features are extracted from each modality. From the EEG signals, features such as the Common Spatial Pattern (CSP), Improved Singular Spectrum Entropy, and correlation dimension are extracted; from the face image, Improved Active Appearance Model features, Gray-Level Co-occurrence Matrix (GLCM) features, and the proposed Shape Local Binary Texture (SLBT) features are retrieved. Following extraction, improved feature-level fusion combines the two feature sets. Finally, the fused features are fed into a hybrid model, combining a Convolutional Neural Network (CNN) and a Bidirectional Gated Recurrent Unit (Bi-GRU), to complete the diagnosis.
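The preprocessing step described above can be sketched as follows. This is a minimal illustration using standard SciPy filters, not the paper's implementation; the kernel size, orientation, and wavelength of the Gabor filter and the window size of the Wiener filter are assumed values chosen for the example.

```python
import numpy as np
from scipy.signal import wiener
from scipy.ndimage import convolve

def gabor_kernel(ksize=21, sigma=4.0, theta=0.0, lam=10.0, gamma=0.5):
    """Real-valued Gabor kernel; all parameters are illustrative assumptions."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    return np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

rng = np.random.default_rng(0)
face = rng.random((64, 64))        # stand-in grayscale face image
eeg = rng.standard_normal(512)     # stand-in single-channel EEG trace

face_pre = convolve(face, gabor_kernel(), mode="nearest")  # Gabor filtering of the face image
eeg_pre = wiener(eeg, mysize=5)                            # Wiener filtering of the EEG signal
```

In practice the Gabor response would be computed at several orientations and scales, with the filter bank's responses passed on to the feature-extraction stage.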
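Of the EEG features listed, CSP is the most algorithmically involved. A common formulation, sketched here under the assumption of a standard (not "improved") two-class CSP solved as a generalized eigenproblem, projects multichannel EEG onto spatial filters that maximize the variance ratio between classes; the trial counts, channel counts, and random data are placeholders.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Standard CSP via the generalized eigenproblem Ca w = lambda (Ca + Cb) w.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples),
    one array per class (e.g. ASD vs. control).
    """
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    vals, vecs = eigh(Ca, Ca + Cb)                    # eigenvalues in ascending order
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]  # keep the most discriminative filters
    return vecs[:, picks].T                           # shape (2 * n_pairs, n_channels)

rng = np.random.default_rng(1)
a = rng.standard_normal((10, 8, 256))   # placeholder class-A trials
b = rng.standard_normal((10, 8, 256))   # placeholder class-B trials
W = csp_filters(a, b)
features = np.log(np.var(W @ a[0], axis=1))  # log-variance CSP features for one trial
```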
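The fusion step can be illustrated with a plain feature-level fusion: per-modality normalization followed by weighted concatenation. The paper's "improved" fusion rule is not specified in this abstract, so the z-scoring and the equal modality weights below are assumptions for the sketch.

```python
import numpy as np

def fuse_features(eeg_feats, face_feats, w_eeg=0.5, w_face=0.5, eps=1e-8):
    """Feature-level fusion sketch: z-score each modality, then concatenate with weights.

    eeg_feats, face_feats: arrays of shape (n_samples, n_features_per_modality).
    The weights w_eeg / w_face are illustrative, not from the paper.
    """
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + eps)

    return np.hstack([w_eeg * zscore(eeg_feats), w_face * zscore(face_feats)])

rng = np.random.default_rng(2)
eeg_feats = rng.random((32, 12))    # e.g. CSP / entropy / correlation-dimension features
face_feats = rng.random((32, 20))   # e.g. AAM / GLCM / SLBT features
fused = fuse_features(eeg_feats, face_feats)  # shape (32, 32), fed to the CNN + Bi-GRU
```

Concatenation at the feature level (rather than decision-level voting) lets the downstream hybrid model learn cross-modal interactions directly, which matches the pipeline the abstract describes.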
The suggested MADDHM model achieved an accuracy of about 91.03 % on EEG and 91.67 % on face analysis, compared with SVM = 87.49 %, DNN = 88.59 %, Bi-GRU = 90.02 %, LSTM = 87.49 %, and CNN = 82.02 %.
As a result, the suggested methodology provides encouraging outcomes and opens up possibilities for early autism detection. The development of such models is not only a technical achievement but…
ISSN: 0169-2607
EISSN: 1872-7565
DOI: 10.1016/j.cmpb.2024.108492