Convolution Neural Network Having Multiple Channels with Own Attention Layer for Depression Detection from Social Data

Bibliographic Details
Published in: New Generation Computing, 2024-03, Vol. 42(1), pp. 135–155
Main Authors: Dalal, Sumit; Jain, Sarika; Dave, Mayank
Format: Article
Language: English
Summary: People share textual posts about their interests, routines, and moods on social platforms, and these posts can be mined to evaluate mental state using diverse techniques such as lexical approaches, machine learning (ML), and deep learning (DL). Larger n-grams (bigrams, trigrams, or quadgrams) carry more contextual information than unigrams, yet most models used for depression classification rely on unigrams alone. Moreover, the widely used depression classifiers, recurrent neural networks (RNNs), retain only the sequential information of the text and ignore the local features of posts. We suggest using a multi-channel convolutional neural network (MCNN) to capture both local features and larger context from user posts. In addition, each channel has a dedicated dot-product attention layer to derive global features from the local features at various context levels. The proposed model is tested on the CLEF-eRisk 2018 depression dataset, comprising posts from 214 depressed and 1493 non-depressed users. Experimental results show that our model achieved competitive accuracy, recall, and F-score of 91.00%, 76.50%, and 70.51%, respectively. Accuracy is up to 5.00% higher and recall approximately 24% higher than a multi-channel CNN without an attention layer. The significant n-grams highlighted by the attention mechanism can be employed to provide a user-level explanation of the depression classification results. However, directly incorporating the attention weights may not be helpful, as the attention highlights are dense and entangled.
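
A minimal sketch of such an architecture, assuming PyTorch; the kernel sizes (1–4, for uni- through quad-grams), filter count, and learnable per-channel query vector are illustrative assumptions, since the record does not give the paper's exact configuration:

```python
# Hedged sketch of a multi-channel CNN with a dedicated dot-product
# attention layer per channel, as described in the abstract.
# All hyperparameters below are assumptions, not the paper's values.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ChannelAttention(nn.Module):
    """Dot-product attention pooling a channel's feature maps into one vector."""
    def __init__(self, dim):
        super().__init__()
        self.query = nn.Parameter(torch.randn(dim))  # learnable query (assumed)

    def forward(self, feats):                  # feats: (batch, seq, dim)
        scores = feats @ self.query             # (batch, seq)
        weights = F.softmax(scores, dim=1)      # attention over positions
        return (weights.unsqueeze(-1) * feats).sum(dim=1)  # (batch, dim)

class MCNNAttention(nn.Module):
    def __init__(self, vocab_size, embed_dim=100, num_filters=128,
                 kernel_sizes=(1, 2, 3, 4), num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # One conv channel per n-gram size (uni- through quad-grams).
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, num_filters, k, padding=k - 1)
            for k in kernel_sizes)
        self.attns = nn.ModuleList(
            ChannelAttention(num_filters) for _ in kernel_sizes)
        self.fc = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, tokens):                  # tokens: (batch, seq)
        x = self.embed(tokens).transpose(1, 2)  # (batch, embed, seq)
        pooled = [attn(F.relu(conv(x)).transpose(1, 2))
                  for conv, attn in zip(self.convs, self.attns)]
        return self.fc(torch.cat(pooled, dim=1))  # depression logits
```

In this sketch, each channel's dot-product attention replaces the usual max-over-time pooling, so the pooled vector is a weighted sum whose weights indicate which n-gram positions the model found significant; this is the mechanism that would support the user-level explanations mentioned in the abstract.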
ISSN: 0288-3635; 1882-7055
DOI: 10.1007/s00354-023-00237-y