Self-attention and asymmetric multi-layer perceptron-gated recurrent unit blocks for protein secondary structure prediction
Published in: Applied Soft Computing, 2024-07, Vol. 159, Article 111604
Main Authors: , ,
Format: Article
Language: English
Summary: Protein secondary structure prediction (PSSP) is one of the most prominent and widely studied tasks in bioinformatics. Deep neural networks have become the primary method for building PSSP models over the last decade owing to their potential to improve PSSP performance. However, there is still room for improvement, as previous studies have yet to reach the theoretical limit of PSSP model performance. In this work, we propose a PSSP model called SADGRU-SS, built on a novel deep learning architecture that combines self-attention, asymmetric multi-layer perceptron (MLP)-gated recurrent unit (GRU) blocks, and a dense block. Our experiments show that using self-attention in the SADGRU-SS architecture increases its performance, and that placing the self-attention layer at the frontmost position of the network yields better results than placing it elsewhere. The asymmetric configuration of the MLP-GRU blocks likewise performs better than a symmetric one. The model is trained on the standard CB6133-filtered dataset and evaluated on the standard CB513 test dataset. On 8-state PSSP, it outperforms other PSSP models, achieving 70.74% and 82.78% prediction accuracy on 8-state and 3-state PSSP, respectively.
•The SADGRU-SS model uses self-attention, asymmetric MLP-GRU blocks, and a dense block.
•The use of self-attention significantly enhances the model's performance.
•The asymmetric configuration of the MLP-GRU blocks raises the model's performance.
•The model outperforms other similar PSSP models on 8-state PSSP.
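
To make the described architecture concrete, the following is a minimal sketch of a SADGRU-SS-style network, assuming PyTorch. The layer widths, the number of blocks, the input feature size (57 per-residue features, as is common for CB6133-derived data), and the reading of "asymmetric" as different widths in successive MLP-GRU blocks are all illustrative assumptions, not the authors' published implementation.

```python
# Hypothetical sketch of a SADGRU-SS-like network in PyTorch.
# All sizes and the interpretation of "asymmetric" are assumptions
# for illustration; the paper's exact design may differ.
import torch
import torch.nn as nn

class MLPGRUBlock(nn.Module):
    """An MLP followed by a bidirectional GRU; widths vary per block."""
    def __init__(self, in_dim, mlp_dim, gru_hidden):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(in_dim, mlp_dim), nn.ReLU())
        self.gru = nn.GRU(mlp_dim, gru_hidden, batch_first=True,
                          bidirectional=True)

    def forward(self, x):                  # x: (batch, seq_len, in_dim)
        h, _ = self.gru(self.mlp(x))
        return h                           # (batch, seq_len, 2 * gru_hidden)

class SADGRUSSSketch(nn.Module):
    def __init__(self, feat_dim=57, n_states=8):
        super().__init__()
        # Self-attention at the frontmost position, the placement the
        # abstract reports works best.
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=3,
                                          batch_first=True)
        # "Asymmetric": successive MLP-GRU blocks use different widths
        # (an assumption about what asymmetry means here).
        self.block1 = MLPGRUBlock(feat_dim, mlp_dim=256, gru_hidden=128)
        self.block2 = MLPGRUBlock(2 * 128, mlp_dim=128, gru_hidden=64)
        # Dense block producing per-residue 8-state logits.
        self.dense = nn.Sequential(nn.Linear(2 * 64, 128), nn.ReLU(),
                                   nn.Linear(128, n_states))

    def forward(self, x):                  # x: (batch, seq_len, feat_dim)
        a, _ = self.attn(x, x, x)          # self-attention: q = k = v = x
        return self.dense(self.block2(self.block1(a)))

# Usage: 2 sequences of 700 residues -> per-residue 8-state logits.
logits = SADGRUSSSketch()(torch.randn(2, 700, 57))  # (2, 700, 8)
```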
ISSN: 1568-4946, 1872-9681
DOI: 10.1016/j.asoc.2024.111604