ERT-GFAN: A multimodal drug–target interaction prediction model based on molecular biology and knowledge-enhanced attention mechanism

Bibliographic Details
Published in: Computers in Biology and Medicine, 2024-09, Vol. 180, p. 109012, Article 109012
Main Authors: Cheng, Xiaoqing; Yang, Xixin; Guan, Yuanlin; Feng, Yihan
Format: Article
Language: English
Description
Summary: In drug discovery, precisely identifying drug–target interactions is crucial for finding new drugs and understanding drug mechanisms. Evolving heterogeneous drug/target data makes it challenging to obtain multimodal representations for drug–target interaction (DTI) prediction. To deal with this, we propose ERT-GFAN, a multimodal drug–target interaction prediction model inspired by molecular biology. Firstly, it integrates bio-inspired principles to obtain structural features of drugs and targets using Extended Connectivity Fingerprints (ECFP). Simultaneously, the knowledge graph embedding model RotatE is employed to discover interaction features of drug–target pairs. Subsequently, a Transformer refines contextual neighborhood features from the obtained structural and interaction features, and multimodal high-dimensional fusion features are constructed from the three modalities. Finally, the DTI prediction results are produced by integrating the multimodal fusion features into a graph high-dimensional fusion feature attention network (GFAN) using our multimodal high-dimensional fusion feature attention. This multimodal approach offers a comprehensive understanding of drug–target interactions and addresses the challenges posed by complex knowledge graphs. By combining structural, interaction, and contextual neighborhood features, ERT-GFAN excels at predicting DTIs. Empirical evaluations on three datasets demonstrate our method's superior performance, with AUC of 0.9739, 0.9862, and 0.9667, AUPR of 0.9598, 0.9789, and 0.9750, and Mean Reciprocal Rank (MRR) of 0.7386, 0.7035, and 0.7133. Ablation studies show an improvement of over 5% in predictive performance compared to baseline unimodal and bimodal models. These results, along with detailed case studies, highlight the efficacy and robustness of our approach.
•We propose a multimodal model for drug–target interaction prediction.
•Extended Connectivity Fingerprints are combined with RotatE knowledge graph embedding to obtain structural and interaction features.
•A Transformer with multi-head self-attention discovers contextual neighborhood features.
•The graph high-dimensional fusion feature attention network (GFAN) employs knowledge-enhanced attention to predict drug–target interactions.
•Experiments and case studies show our method's superiority over current methods for drug–target interaction prediction.
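To make the summary more concrete, below is a minimal illustrative sketch (not the authors' released code) of the first two feature extractors named above: ECFP structural features computed with RDKit's Morgan fingerprint, and a toy NumPy version of the RotatE triple-scoring function used for interaction features. The SMILES string, embedding dimension, and function names are hypothetical choices for illustration only.

```python
# Minimal sketch, assuming RDKit is available; not the ERT-GFAN implementation.
import numpy as np
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem


def ecfp_features(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    """Extended Connectivity Fingerprint (ECFP) of a drug as a 0/1 bit vector."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    arr = np.zeros((n_bits,), dtype=np.int8)
    DataStructs.ConvertToNumpyArray(fp, arr)
    return arr


def rotate_score(head: np.ndarray, rel_phase: np.ndarray, tail: np.ndarray) -> float:
    """Toy RotatE score: the relation rotates the head embedding in complex space.

    head/tail are complex-valued entity embeddings; rel_phase holds the relation's
    rotation angles. Less negative scores indicate a more plausible triple.
    """
    rotation = np.exp(1j * rel_phase)                  # unit-modulus complex rotation
    return -float(np.sum(np.abs(head * rotation - tail)))


if __name__ == "__main__":
    drug_fp = ecfp_features("CC(=O)Oc1ccccc1C(=O)O")   # aspirin, example only
    print("ECFP bits set:", int(drug_fp.sum()))

    dim = 64
    rng = np.random.default_rng(0)
    head = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    tail = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    phase = rng.uniform(-np.pi, np.pi, dim)
    print("RotatE score:", rotate_score(head, phase, tail))
```

In the full model these two feature streams, together with Transformer-derived contextual neighborhood features, would be fused and passed to the GFAN attention network; that stage is specific to the paper and is not sketched here.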
ISSN: 0010-4825
      1879-0534
DOI: 10.1016/j.compbiomed.2024.109012