
RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems

Full Text: 16179-76512-1-PB.pdf

Open-domain human-computer conversation has been attracting increasing attention over the past few years. However, there does not exist a standard automatic evaluation metric for open-domain dialog systems; researchers usually resort to human annotation for model evaluation, which is time- and labor-intensive. In this paper, we propose RUBER, a Referenced metric and Unreferenced metric Blended Evaluation Routine, which evaluates a reply by taking into consideration both a groundtruth reply and a query (previous user-issued utterance). Our metric is learnable, but its training does not require labels of human satisfaction. Hence, RUBER is flexible and extensible to different datasets and languages. Experiments on both retrieval and generative dialog systems show that RUBER has a high correlation with human annotation, and that RUBER has fair transferability over different datasets.
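The abstract describes RUBER as a blend of a referenced score (similarity between the candidate reply and the groundtruth reply) and an unreferenced score (a learned relatedness measure between the query and the reply, trained without human labels). The sketch below is only illustrative of that blending idea, assuming pre-computed sentence embeddings; the function names, the bilinear stub for the unreferenced scorer, and the pooling modes are assumptions, not the authors' implementation (the paper trains a neural unreferenced scorer with negative sampling and normalizes both scores before pooling).

```python
import numpy as np


def cosine(u: np.ndarray, v: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))


def referenced_score(reply_emb: np.ndarray, groundtruth_emb: np.ndarray) -> float:
    """Referenced metric: how close the reply is to the groundtruth reply."""
    return cosine(reply_emb, groundtruth_emb)


def unreferenced_score(query_emb: np.ndarray, reply_emb: np.ndarray,
                       weight: np.ndarray) -> float:
    """Unreferenced metric: query-reply relatedness, stubbed here as a
    bilinear form squashed to (0, 1); RUBER instead learns a neural scorer."""
    return float(1.0 / (1.0 + np.exp(-(query_emb @ weight @ reply_emb))))


def ruber_blend(ref: float, unref: float, mode: str = "min") -> float:
    """Blend the two scores; the paper compares several pooling heuristics
    (min, max, arithmetic/geometric mean), applied after normalizing both
    scores to a common range."""
    if mode == "min":
        return min(ref, unref)
    if mode == "mean":
        return 0.5 * (ref + unref)
    raise ValueError(f"unknown blend mode: {mode}")


# Toy usage with random vectors standing in for real sentence embeddings.
rng = np.random.default_rng(0)
dim = 8
query, reply, groundtruth = rng.normal(size=(3, dim))
W = rng.normal(scale=0.1, size=(dim, dim))
print(ruber_blend(referenced_score(reply, groundtruth),
                  unreferenced_score(query, reply, W)))
```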

Citation

C. Tao, L. Mou, D. Zhao, R. Yan. "RUBER: An Unsupervised Method for Automatic Evaluation of Open-Domain Dialog Systems". National Conference on Artificial Intelligence (AAAI), pp. 722-729, February 2018.

Keywords: Natural Language Understanding, Dialog System
Category: In Conference
Web Links: AAAI

BibTeX

@inproceedings{Tao+al:AAAI18,
  author = {Chongyang Tao and Lili Mou and Dongyan Zhao and Rui Yan},
  title = {RUBER: An Unsupervised Method for Automatic Evaluation of
    Open-Domain Dialog Systems},
  pages = {722--729},
  booktitle = {National Conference on Artificial Intelligence (AAAI)},
  year = {2018},
}

