
How Transferable are Neural Networks in NLP Applications?

Full Text: D16-1046.pdf

Transfer learning aims to leverage valuable knowledge from a source domain to improve model performance in a target domain. It is particularly important for neural networks, which are prone to overfitting. In some fields, such as image processing, many studies have demonstrated the effectiveness of neural network-based transfer learning. For neural NLP, however, existing studies have only casually applied transfer learning, and their conclusions are inconsistent. In this paper, we conduct systematic case studies and provide an illuminating picture of the transferability of neural networks in NLP.
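As a minimal sketch of the kind of parameter transfer the paper studies (initializing a target-task model with weights trained on a source task, with lower layers frozen), the toy example below is illustrative only, not the authors' code: it trains a tiny two-layer network on synthetic source data, then transfers and freezes its first layer while fitting only the output layer on a small target set.

```python
import numpy as np

# Illustrative sketch only (not from the paper): parameter-based transfer.
# Train on a "source" task, reuse the first layer for a "target" task,
# and freeze it while training only the output weights.

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train(X, y, W1, w2, freeze_first=False, lr=0.5, steps=200):
    """Gradient descent on log-loss; optionally freeze the first layer."""
    for _ in range(steps):
        h = np.tanh(X @ W1)            # hidden activations
        p = sigmoid(h @ w2)            # predicted probability
        grad_out = (p - y) / len(y)    # dLoss/dlogit for log-loss
        w2 -= lr * h.T @ grad_out
        if not freeze_first:
            grad_h = np.outer(grad_out, w2) * (1 - h ** 2)  # backprop tanh
            W1 -= lr * X.T @ grad_h
    return W1, w2

# Source task: plenty of labeled data.
Xs = rng.normal(size=(200, 4))
ys = (Xs[:, 0] + Xs[:, 1] > 0).astype(float)
W1, w2 = train(Xs, ys,
               rng.normal(scale=0.1, size=(4, 8)),
               rng.normal(scale=0.1, size=8))

# Target task: related labeling, little data. Transfer and freeze layer 1.
Xt = rng.normal(size=(20, 4))
yt = (Xt[:, 0] + Xt[:, 1] > 0).astype(float)
W1_t = W1.copy()                       # transferred (frozen) parameters
_, w2_t = train(Xt, yt, W1_t, rng.normal(scale=0.1, size=8),
                freeze_first=True)

acc = ((sigmoid(np.tanh(Xt @ W1_t) @ w2_t) > 0.5) == yt.astype(bool)).mean()
print(acc)
```

Whether such transfer helps in practice depends on task similarity and on which layers are transferred, which is exactly what the paper's case studies examine.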

Citation

L. Mou, Z. Meng, R. Yan, G. Li, Y. Xu, L. Zhang, Z. Jin. "How Transferable are Neural Networks in NLP Applications?". EMNLP - Conference on Empirical Methods in Natural Language Processing, Austin, USA, pp. 479–489, November 2016.

Keywords:  
Category: In Conference
Web Links: DOI, ACL

BibTeX

@inproceedings{Mou+al:(EMNLP)16,
  author = {Lili Mou and Zhao Meng and Rui Yan and Ge Li and Yan Xu and Lu
    Zhang and Zhi Jin},
  title = {How Transferable are Neural Networks in {NLP} Applications?},
  pages = {479--489},
  booktitle = {EMNLP - Conference on Empirical Methods in Natural Language
    Processing},
  year = {2016},
}

Last Updated: February 03, 2021
Submitted by Sabina P
