Disentangled Representation Learning for Non-Parallel Text Style Transfer
Full Text: P19-1041.pdf
This paper tackles the problem of disentangling the latent representations of style and content in language models. We propose a simple yet effective approach, which incorporates auxiliary multi-task and adversarial objectives, for style prediction and bag-of-words prediction, respectively. We show, both qualitatively and quantitatively, that the style and content are indeed disentangled in the latent space. This disentangled latent representation learning can be applied to style transfer on non-parallel corpora. We achieve high performance in terms of transfer accuracy, content preservation, and language fluency, in comparison to various previous approaches.
Citation
V. John, L. Mou, H. Bahuleyan, O. Vechtomova. "Disentangled Representation Learning for Non-Parallel Text Style Transfer". Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (ACL), pp. 424–434, July 2019.
Category: In Conference
Web Links: ACL
BibTeX
@inproceedings{John+al:ACL19,
  author    = {Vineet John and Lili Mou and Hareesh Bahuleyan and Olga Vechtomova},
  title     = {Disentangled Representation Learning for Non-Parallel Text Style Transfer},
  booktitle = {Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  pages     = {424--434},
  year      = {2019},
}

Last Updated: February 02, 2021