
Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation

This paper addresses the question: In neural dialog systems, why do sequence-to-sequence (Seq2Seq) neural networks generate short and meaningless replies for open-domain response generation? We conjecture that in a dialog system, due to the randomness of spoken language, there may be multiple equally plausible replies to a single utterance, and that this one-to-many mapping is what causes the Seq2Seq model's deficiency. To evaluate our conjecture, we propose a systematic way to mimic the dialog scenario in machine translation systems, using both real datasets and carefully constructed toy datasets. Experimental results show that we can reproduce the phenomenon of short and meaningless generated sentences in the translation setting.
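The paper's exact toy-data construction is not given on this page. As a rough, hypothetical illustration of the conjecture only, the plain-Python sketch below (all names are placeholders) builds two toy parallel corpora: a translation-like corpus with exactly one target per source, and a dialog-like corpus in which each source is paired with several equally plausible targets.

import random

random.seed(0)

VOCAB = [f"w{i}" for i in range(50)]

def sample_source(length=6):
    """Draw a random source 'sentence' from the toy vocabulary."""
    return " ".join(random.choices(VOCAB, k=length))

def deterministic_target(src):
    """Translation-like setting: exactly one target per source
    (here, a fixed token-wise mapping)."""
    return " ".join(f"t_{tok}" for tok in src.split())

def dialog_like_targets(src, n_replies=4):
    """Dialog-like setting: several equally plausible targets per source,
    mimicking the one-to-many nature of open-domain conversation."""
    return [" ".join(random.choices(VOCAB, k=random.randint(3, 8)))
            for _ in range(n_replies)]

def build_corpus(n_sources=1000, one_to_many=False):
    """Return (source, target) pairs; with one_to_many=True each source
    appears with multiple different targets, which is what the paper
    conjectures leads Seq2Seq models toward short, generic replies."""
    pairs = []
    for _ in range(n_sources):
        src = sample_source()
        if one_to_many:
            pairs.extend((src, tgt) for tgt in dialog_like_targets(src))
        else:
            pairs.append((src, deterministic_target(src)))
    return pairs

if __name__ == "__main__":
    mt_like = build_corpus(one_to_many=False)
    dialog_like = build_corpus(one_to_many=True)
    print(len(mt_like), "one-to-one pairs;", len(dialog_like), "one-to-many pairs")

Training the same Seq2Seq model on the two corpora would then isolate the effect of target ambiguity, which is the comparison the abstract describes.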

Citation

B. Wei, S. Lu, L. Mou, H. Zhou, P. Poupart, G. Li, Z. Jin. "Why Do Neural Dialog Systems Generate Short and Meaningless Replies? A Comparison between Dialog and Translation". ICASSP, pp. 7290-7294, May 2019.

Keywords:  
Category: In Conference
Web Links: IEEE

BibTeX

@inproceedings{Wei+al:ICASSP19,
  author = {Bolin Wei and Shuai Lu and Lili Mou and Hao Zhou and Pascal Poupart
    and Ge Li and Zhi Jin},
  title = {Why Do Neural Dialog Systems Generate Short and Meaningless Replies?
    A Comparison between Dialog and Translation},
  booktitle = {IEEE International Conference on Acoustics, Speech and Signal
    Processing (ICASSP)},
  pages = {7290-7294},
  year = 2019,
}

