What to Expect from Expected Kneser-Ney Smoothing

Michael Levit, Sarangarajan Parthasarathy and Shuangyu Chang

Abstract:

Kneser-Ney smoothing on expected counts was proposed recently. In this paper we revisit this technique and suggest a number of optimizations and extensions. We then analyse its performance in several practical speech recognition scenarios that depend on fractional sample counts, such as training on uncertain data, language model adaptation, and Word-Phrase-Entity models. We show that the proposed approach to smoothing outperforms known alternatives by a significant margin.
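For background (this sketch is the conventional formulation, not taken from the paper): interpolated Kneser-Ney subtracts a fixed discount D from each observed n-gram count and redistributes the freed mass to a lower-order distribution built from continuation counts; the expected-count variant applies the same idea when the counts c(.) are fractional expectations rather than integers.

\[
P_{\mathrm{KN}}(w \mid u) \;=\; \frac{\max\bigl(c(uw) - D,\, 0\bigr)}{\sum_{w'} c(uw')}
\;+\; \frac{D \, N_{1+}(u\,\bullet)}{\sum_{w'} c(uw')}\, P_{\mathrm{KN}}\bigl(w \mid \bar{u}\bigr),
\qquad
N_{1+}(u\,\bullet) \;=\; \bigl|\{\, w' : c(uw') > 0 \,\}\bigr|,
\]

where \(\bar{u}\) denotes the context \(u\) with its first word dropped, and the lower-order distributions use continuation counts \(N_{1+}(\bullet\, u w)\) in place of raw counts.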


Cite as: Levit, M., Parthasarathy, S., Chang, S. (2018) What to Expect from Expected Kneser-Ney Smoothing. Proc. Interspeech 2018, 3378-3382, DOI: 10.21437/Interspeech.2018-84.


BibTeX Entry:

@inproceedings{Levit2018,
  author={Michael Levit and Sarangarajan Parthasarathy and Shuangyu Chang},
  title={What to Expect from Expected Kneser-Ney Smoothing},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3378--3382},
  doi={10.21437/Interspeech.2018-84},
  url={http://dx.doi.org/10.21437/Interspeech.2018-84}
}