
Supervised Contrastive Learned Deep Model for Question Continuation Evaluation

Bibliographic Details
Published in: IEEE Transactions on Human-Machine Systems, 2023-06, Vol. 53 (3), p. 560-568
Main Authors: Sun, Bo; Li, Hang; He, Jun; Zhang, Yinghui
Format: Article
Language: English
Description
Summary: Question continuation evaluation (QCE) is a subtask of dialogue act prediction (DAP) in natural language processing that aims to predict whether each question in a dialogue is worth following up in a given context. QCE matters for communication, education, and even entertainment. Regrettably, it has long been treated only as an auxiliary task for conversational machine reading comprehension. QCE involves more information and more relationships than the original DAP task, making it more complex; moreover, its classification scheme inherently makes samples easy to confuse. In this article, a transformer long short-term memory (LSTM)-based, supervised contrastively learned model is proposed to automatically assign QCE labels. The model is built mainly from transformer encoder blocks and LSTM modules, and supervised contrastive learning (SCL) is introduced into the training process. The architecture extracts both the information within each corpus and the relationships among corpora, while SCL alleviates the confusion between classes. Experiments are conducted on the only applicable dataset, Question Answering in Context (QuAC). The model is shown to perform well and to be robust to missing data: its performance exceeds QuAC baselines by 2.3% in accuracy and 12.2% in macro-F1 score, and it degrades by only about 2.3% when just 10% of the data remain.
ISSN: 2168-2291 (print); 2168-2305 (electronic)
DOI: 10.1109/THMS.2023.3271625
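
The abstract describes a model built from transformer encoder blocks and LSTM modules, trained with a supervised contrastive learning objective alongside the usual classification loss. The sketch below is a minimal illustration of that combination, not the authors' implementation: the layer sizes, the loss weighting, and the SupCon formulation (the standard Khosla et al. supervised contrastive loss over L2-normalized features) are all assumptions, since the record gives no architectural or training details.

```python
# Minimal sketch (PyTorch) of a transformer-encoder + LSTM classifier trained with a
# supervised contrastive loss in addition to cross-entropy. Layer sizes, the 0.5 loss
# weight, and the SupCon formulation are assumptions, not the paper's implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TransformerLSTMClassifier(nn.Module):
    def __init__(self, vocab_size, d_model=256, n_heads=4, n_layers=2,
                 lstm_hidden=128, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.lstm = nn.LSTM(d_model, lstm_hidden, batch_first=True)
        self.classifier = nn.Linear(lstm_hidden, n_classes)

    def forward(self, token_ids):
        x = self.encoder(self.embed(token_ids))   # contextualize the dialogue tokens
        _, (h_n, _) = self.lstm(x)                # summarize the sequence
        feat = h_n[-1]                            # (batch, lstm_hidden)
        return self.classifier(feat), F.normalize(feat, dim=-1)


def supervised_contrastive_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (Khosla et al., 2020) over L2-normalized features."""
    sim = features @ features.T / temperature                 # pairwise similarities
    mask_self = torch.eye(len(labels), dtype=torch.bool, device=features.device)
    sim = sim.masked_fill(mask_self, -1e9)                    # exclude self-pairs
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~mask_self
    pos_counts = pos_mask.sum(1).clamp(min=1)                 # avoid division by zero
    return -(log_prob * pos_mask.float()).sum(1).div(pos_counts).mean()


# Joint objective on a toy batch: cross-entropy plus a weighted SCL term.
model = TransformerLSTMClassifier(vocab_size=30522)
tokens = torch.randint(0, 30522, (8, 40))   # 8 dialogue contexts, 40 tokens each
labels = torch.randint(0, 2, (8,))          # 1 = worth following up, 0 = not
logits, feats = model(tokens)
loss = F.cross_entropy(logits, labels) + 0.5 * supervised_contrastive_loss(feats, labels)
loss.backward()
```

In this reading, the SCL term pulls representations of same-label questions together and pushes different-label ones apart, which is one plausible way the abstract's "confusing samples" concern could be addressed during training.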