
SSM-Seq2Seq: A Novel Speaking Style Neural Conversation Model

Bibliographic Details
Published in: Journal of Physics: Conference Series, 2020-06, Vol. 1576 (1), p. 012001
Main Authors: Wang, Boran, Sun, Yingxiang
Format: Article
Language:English
Description
Summary: Open-domain personalized dialogue systems have attracted increasing attention because of their ability to generate interesting and personalized responses. To incorporate speaking style, existing methods first train a response generator on a non-personalized conversational dataset and a speaking style extractor on a personalized non-conversational dataset, and then generate personalized responses through a parameter-sharing mechanism. However, the speaking styles of these two training datasets are entirely different, which makes the performance of existing methods suboptimal. Intuitively, narrowing the gap between the two datasets' speaking styles should improve performance. Thus, in this paper, we propose a novel speaking style memory sequence-to-sequence (SSM-Seq2Seq) model, which incorporates speaking style information from the personalized non-conversational dataset into the training dataset of the response generator, eliminating this gap. Extensive experiments show that the proposed approach yields substantial improvements over competitive baselines.
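The abstract does not detail the model's internals, but the core idea — distilling a style vector from a personalized non-conversational corpus and injecting it into the response generator — can be sketched as a toy decoder step. Everything below (the averaged style memory, the blending coefficient `alpha`, the function names) is an illustrative assumption, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
EMB = 8  # toy embedding size

# Hypothetical "speaking style memory": the mean embedding of sentences
# drawn from the personalized NON-conversational dataset. (Illustrative
# stand-in; the paper's extractor is presumably a learned module.)
style_sentences = rng.normal(size=(5, EMB))  # 5 toy sentence embeddings
style_memory = style_sentences.mean(axis=0)  # a single style vector

def decode_step(hidden, token_emb, style_memory, alpha=0.5):
    """One toy decoder step: blend the current token embedding with the
    style memory before updating the hidden state, so style information
    flows into generation at every step."""
    styled_input = (1 - alpha) * token_emb + alpha * style_memory
    return np.tanh(hidden + styled_input)

hidden = np.zeros(EMB)
token_emb = rng.normal(size=EMB)
new_hidden = decode_step(hidden, token_emb, style_memory)
print(new_hidden.shape)  # (8,)
```

In a real system the blend would be learned (e.g. gating or attention over a memory bank) rather than a fixed `alpha`; the sketch only shows where the non-conversational style signal enters the generator.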
ISSN: 1742-6588
EISSN: 1742-6596
DOI: 10.1088/1742-6596/1576/1/012001