
Lightly Supervised vs. Semi-supervised Training of Acoustic Model on Luxembourgish for Low-resource Automatic Speech Recognition

Karel Veselý, Carlos Segura, Igor Szöke, Jordi Luque and Jan Černocký

Abstract:

In this work, we focus on exploiting ‘inexpensive’ data in order to improve the DNN acoustic model for ASR. We explore two strategies: the first uses untranscribed data from the target domain; the second relies on the proper selection of excerpts from imperfectly transcribed out-of-domain public data, such as parliamentary speeches. We found that both approaches lead to similar results, making them equally beneficial for practical use. The Luxembourgish ASR seed system had a 38.8% WER, which improved by roughly 4% absolute: to 34.6% with untranscribed data and 34.9% with lightly supervised data. Adding both databases simultaneously led to 34.4% WER, only a small further improvement. As a secondary research topic, we experiment with semi-supervised state-level minimum Bayes risk (sMBR) training. However, for sMBR we saw no improvement from adding the automatically transcribed target data, even though similar techniques yield good results in the case of cross-entropy (CE) training.
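The paper's own pipeline is not reproduced here, but the lightly supervised selection idea from the abstract can be illustrated with a minimal Python sketch. Assuming a seed system has already decoded the imperfectly transcribed out-of-domain audio, the sketch below keeps only utterances whose decoded hypothesis agrees closely with the provided transcript; all function names and the 20% agreement threshold are illustrative assumptions, not taken from the paper.

# Minimal sketch of lightly supervised data selection (an assumption, not
# the authors' actual pipeline): decode imperfectly transcribed audio with
# a seed ASR system, then keep utterances whose hypothesis agrees closely
# with the given transcript.

from typing import Dict, List


def edit_distance(ref: List[str], hyp: List[str]) -> int:
    """Word-level Levenshtein distance between reference and hypothesis."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, start=1):
            prev, dp[j] = dp[j], min(
                dp[j] + 1,          # deletion
                dp[j - 1] + 1,      # insertion
                prev + (r != h),    # substitution (0 cost on match)
            )
    return dp[-1]


def select_utterances(
    transcripts: Dict[str, List[str]],  # utt_id -> imperfect transcript words
    hypotheses: Dict[str, List[str]],   # utt_id -> seed-system decode words
    max_wer: float = 0.2,               # hypothetical agreement threshold
) -> List[str]:
    """Keep utterances whose hypothesis/transcript WER is below max_wer."""
    kept = []
    for utt_id, ref in transcripts.items():
        hyp = hypotheses.get(utt_id)
        if not ref or hyp is None:
            continue
        if edit_distance(ref, hyp) / len(ref) <= max_wer:
            kept.append(utt_id)
    return kept

A stricter threshold trades data quantity for transcript quality; the selection criterion actually used in the paper may of course differ.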


Cite as: Veselý, K., Segura, C., Szöke, I., Luque, J., Černocký, J. (2018) Lightly Supervised vs. Semi-supervised Training of Acoustic Model on Luxembourgish for Low-resource Automatic Speech Recognition. Proc. Interspeech 2018, 2883-2887, DOI: 10.21437/Interspeech.2018-2361.


BiBTeX Entry:

@inproceedings{Veselý2018,
  author={Karel Veselý and Carlos Segura and Igor Szöke and Jordi Luque and Jan Černocký},
  title={Lightly Supervised vs. Semi-supervised Training of Acoustic Model on Luxembourgish for Low-resource Automatic Speech Recognition},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2883--2887},
  doi={10.21437/Interspeech.2018-2361},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2361}
}