A Comparative Study on Regularization Strategies for Embedding-based Neural Networks
Full Text: D15-1252.pdf

This paper compares regularization strategies for addressing a common problem, severe overfitting, in embedding-based neural networks for NLP. We chose two widely studied neural models and tasks as our testbed. We evaluated several frequently applied or newly proposed regularization strategies, including penalizing weights (embeddings excluded), penalizing embeddings, re-embedding words, and dropout. We also emphasized incremental hyperparameter tuning and the combination of different regularizations. The results provide practical guidance on tuning hyperparameters for neural NLP models.
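To make three of the compared strategies concrete, below is a minimal, illustrative PyTorch sketch. It is not the authors' implementation; the toy model, the parameter names, and the coefficients lam_weights and lam_embed are assumptions for illustration. It shows how one might apply an L2 penalty to connection weights with embeddings excluded, an optional separate penalty on embeddings, and dropout (re-embedding words is omitted here).

import torch
import torch.nn as nn

class TextClassifier(nn.Module):
    """Toy embedding-based model; a simple stand-in for the
    architectures actually studied in the paper."""
    def __init__(self, vocab_size=10000, embed_dim=100,
                 hidden_dim=200, num_classes=2, p_drop=0.5):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.hidden = nn.Linear(embed_dim, hidden_dim)
        self.dropout = nn.Dropout(p_drop)  # dropout regularization
        self.out = nn.Linear(hidden_dim, num_classes)

    def forward(self, token_ids):
        # Average word embeddings into a fixed-size sentence vector.
        x = self.embedding(token_ids).mean(dim=1)
        x = torch.relu(self.hidden(x))
        x = self.dropout(x)
        return self.out(x)

def l2_penalty(model, lam_weights=1e-4, lam_embed=0.0):
    """L2 penalty with separate coefficients, so embeddings can be
    excluded (lam_embed=0) or penalized on their own."""
    penalty = 0.0
    for name, param in model.named_parameters():
        if "embedding" in name:
            penalty = penalty + lam_embed * param.pow(2).sum()
        elif "weight" in name:  # connection weights; biases unpenalized
            penalty = penalty + lam_weights * param.pow(2).sum()
    return penalty

# Hypothetical usage: add the penalty to the task loss.
model = TextClassifier()
logits = model(torch.randint(0, 10000, (32, 20)))
labels = torch.randint(0, 2, (32,))
loss = nn.functional.cross_entropy(logits, labels) + l2_penalty(model)

Keeping separate coefficients for embedding and non-embedding parameters is what allows the "penalizing weights (embeddings excluded)" and "penalizing embeddings" strategies to be tuned independently, in the spirit of the paper's incremental hyperparameter tuning.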
Citation
H. Peng, L. Mou, G. Li, Y. Chen, Y. Lu, Z. Jin. "A Comparative Study on Regularization Strategies for Embedding-based Neural Networks". EMNLP - Conference on Empirical Methods in Natural Language Processing, pp. 2106–2111, September 2015.
Category: In Conference
Web Links: doi | ACL
BibTeX
@inproceedings{Peng+al:(EMNLP)15,
  author    = {Hao Peng and Lili Mou and Ge Li and Yunchuan Chen and Yangyang Lu and Zhi Jin},
  title     = {A Comparative Study on Regularization Strategies for Embedding-based Neural Networks},
  booktitle = {EMNLP - Conference on Empirical Methods in Natural Language Processing},
  pages     = {2106--2111},
  year      = {2015},
}

Last Updated: February 04, 2021