
Speaker Adaptation and Adaptive Training for Jointly Optimised Tandem Systems

Yu Wang, Chao Zhang, Mark Gales and Philip Woodland

Abstract:

Speaker independent (SI) Tandem systems trained by joint optimisation of bottleneck (BN) deep neural networks (DNNs) and Gaussian mixture models (GMMs) have been found to produce similar word error rates (WERs) to Hybrid DNN systems. A key advantage of using GMMs is that existing speaker adaptation methods, such as maximum likelihood linear regression (MLLR), can be used to account for diverse speaker variations and improve system robustness. This paper investigates speaker adaptation and adaptive training (SAT) schemes for jointly optimised Tandem systems. The adaptation techniques investigated include constrained MLLR (CMLLR) transforms based on BN features for SAT, as well as MLLR and parameterised sigmoid functions for unsupervised test-time adaptation. Experiments using English multi-genre broadcast (MGB3) data show that CMLLR SAT yields a 4% relative WER reduction over jointly trained Tandem and Hybrid SI systems, and that further reductions in WER are obtained by system combination.
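As background to the adaptation scheme in the abstract: CMLLR (also known as fMLLR) estimates a per-speaker affine transform of the feature vectors under a maximum-likelihood criterion. A standard formulation is sketched below; the symbols are generic textbook notation, not taken from this paper.

```latex
% CMLLR: affine transform of feature vector o_t for speaker s
\hat{\mathbf{o}}_t^{(s)} = \mathbf{A}^{(s)} \mathbf{o}_t^{(s)} + \mathbf{b}^{(s)}
```

The transform parameters $\mathbf{A}^{(s)}$ and $\mathbf{b}^{(s)}$ are estimated to maximise the likelihood of speaker $s$'s data under the acoustic model; in SAT, the GMM parameters are then re-estimated given the per-speaker transforms, so the canonical model represents speaker-normalised features.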


Cite as: Wang, Y., Zhang, C., Gales, M., Woodland, P. (2018) Speaker Adaptation and Adaptive Training for Jointly Optimised Tandem Systems. Proc. Interspeech 2018, 872-876, DOI: 10.21437/Interspeech.2018-2432.


BiBTeX Entry:

@inproceedings{Wang2018,
  author={Yu Wang and Chao Zhang and Mark Gales and Philip Woodland},
  title={Speaker Adaptation and Adaptive Training for Jointly Optimised Tandem Systems},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={872--876},
  doi={10.21437/Interspeech.2018-2432},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2432}
}