Efficient Language Model Adaptation with Noise Contrastive Estimation and Kullback-Leibler Regularization

Jesús Andrés-Ferrer, Nathan Bodenstab and Paul Vozila

Abstract:

Many language modeling (LM) tasks have limited in-domain data for training. Exploiting out-of-domain data while retaining the relevant in-domain statistics is a desired property in these scenarios. Kullback-Leibler Divergence (KLD) regularization is a popular method for acoustic model (AM) adaptation. KLD regularization assumes that the last layer is a softmax that fully activates the targets of both in-domain and out-of-domain models. Unfortunately, this softmax activation is computationally prohibitive for language modeling, where the number of output classes is large, typically 50k to 100k, and may exceed 800k in some cases. The computational bottleneck of the softmax during LM training can be reduced by an order of magnitude using techniques such as noise contrastive estimation (NCE), which replaces the cross-entropy loss function with a binary classification problem between the target output and random noise samples. In this work we combine NCE and KLD regularization and offer a fast domain adaptation method for LM training, while also retaining important attributes of the original NCE, such as self-normalization. We show on a medical domain-adaptation task that our method improves perplexity by 10.1% relative to a strong LSTM baseline.
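As an illustration of the combination the abstract describes, below is a minimal sketch (not the authors' exact formulation) of an NCE-style loss with a KL-divergence regularizer toward a frozen out-of-domain model, computed only over the sampled classes. The function name nce_kld_loss, the kld_weight interpolation, and the restriction of the KL term to the target-plus-noise columns are assumptions made for illustration; the sketch is written against PyTorch.

import torch
import torch.nn.functional as F

def nce_kld_loss(in_logits, out_logits, log_noise_probs, is_target, kld_weight=0.5):
    """
    in_logits:       (batch, 1 + k) unnormalized scores from the in-domain model for
                     the target word (column 0) and k noise words.
    out_logits:      (batch, 1 + k) scores for the same words from the frozen
                     out-of-domain model.
    log_noise_probs: (batch, 1 + k) log q(w) under the noise distribution.
    is_target:       (batch, 1 + k) float mask, 1.0 for the target column, 0.0 for noise.
    kld_weight:      interpolation weight between the NCE loss and the KL term
                     (hypothetical; the paper's weighting may differ).
    """
    k = in_logits.size(1) - 1

    # NCE treats each (word, history) pair as a binary classification between data
    # and noise: P(data | w, h) = sigmoid(s(w, h) - log(k * q(w))), assuming the
    # model is self-normalized so its score approximates log p(w | h).
    logits_vs_noise = in_logits - (torch.log(torch.tensor(float(k))) + log_noise_probs)
    nce = -(is_target * F.logsigmoid(logits_vs_noise)
            + (1.0 - is_target) * F.logsigmoid(-logits_vs_noise)).mean()

    # Hedged KL regularizer: a softmax restricted to the sampled classes, pulling the
    # in-domain model toward the out-of-domain model on those classes only.
    kld = F.kl_div(F.log_softmax(in_logits, dim=-1),
                   F.softmax(out_logits, dim=-1),
                   reduction="batchmean")

    return (1.0 - kld_weight) * nce + kld_weight * kld

Restricting both terms to the same 1 + k sampled columns keeps the per-step cost independent of the vocabulary size, which is the motivation for avoiding the full softmax in the first place.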


Cite as: Andrés-Ferrer, J., Bodenstab, N., Vozila, P. (2018) Efficient Language Model Adaptation with Noise Contrastive Estimation and Kullback-Leibler Regularization. Proc. Interspeech 2018, 3368-3372, DOI: 10.21437/Interspeech.2018-1345.


BiBTeX Entry:

@inproceedings{Andrés-Ferrer2018,
  author={Jesús Andrés-Ferrer and Nathan Bodenstab and Paul Vozila},
  title={Efficient Language Model Adaptation with Noise Contrastive Estimation and Kullback-Leibler Regularization},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3368--3372},
  doi={10.21437/Interspeech.2018-1345},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1345}
}