Learning word representation by jointly using neighbor and syntactic contexts
Published in: Neurocomputing (Amsterdam), 2021-10, Vol. 456, pp. 136-146
Main Authors:
Format: Article
Language: English
Summary: Interpretability is a significant aspect of distributed word representation learning models. Although the most advanced pretrained models have achieved the best results to date, the interpretability of a pretrained model is difficult to explain clearly. For this reason, building on the interpretability of distributed word embeddings, this paper presents a method of learning word representations using a joint context. Existing distributed models for learning word representations usually focus on either the neighbor context or the syntactic context. We argue that it is necessary to model both contexts simultaneously. In particular, the pointwise mutual information obtained by combining the two types of contexts can efficiently express the correlation between words. We propose two alternative distribution models that learn word representations from neighbor and syntactic contexts via a simple and effective joint learning framework. The proposed models are trained on a public corpus, and the learned representations are evaluated on word analogy, word similarity, and sentence classification tasks. The experimental results demonstrate the potential of the proposed method.
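The abstract's key ingredient is pointwise mutual information (PMI) computed over a joint context that pools linear neighbors and syntactic (dependency) contexts. The sketch below is an illustrative assumption, not the paper's implementation: it uses toy co-occurrence counts, and the convention of tagging syntactic contexts with their dependency relation is assumed for illustration.

```python
from collections import Counter
import math

def pmi(pair_counts, word_counts, ctx_counts, total):
    """PMI(w, c) = log( P(w, c) / (P(w) * P(c)) ) for each observed pair."""
    scores = {}
    for (w, c), n in pair_counts.items():
        p_wc = n / total
        p_w = word_counts[w] / total
        p_c = ctx_counts[c] / total
        scores[(w, c)] = math.log(p_wc / (p_w * p_c))
    return scores

# Toy counts (hypothetical): neighbor contexts are plain words; syntactic
# contexts are tagged with a dependency relation, e.g. "barks/nsubj".
neighbor = Counter({("dog", "barks"): 4, ("cat", "purrs"): 3,
                    ("dog", "the"): 2, ("cat", "the"): 2})
syntactic = Counter({("dog", "barks/nsubj"): 4, ("cat", "purrs/nsubj"): 3})

# Joint context: simply pool the two co-occurrence tables before computing PMI.
pairs = neighbor + syntactic
total = sum(pairs.values())
words, ctxs = Counter(), Counter()
for (w, c), n in pairs.items():
    words[w] += n
    ctxs[c] += n

scores = pmi(pairs, words, ctxs, total)
```

On these toy counts, informative contexts such as ("dog", "barks") receive positive PMI while the uninformative ("dog", "the") goes negative, which is the property that makes PMI a useful correlation signal between words and contexts.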
ISSN: 0925-2312, 1872-8286
DOI: 10.1016/j.neucom.2021.03.130