Encoder Transfer for Attention-based Acoustic-to-word Speech Recognition

Sei Ueno, Takafumi Moriya, Masato Mimura, Shinsuke Sakai, Yusuke Shinohara, Yoshikazu Yamaguchi, Yushi Aono and Tatsuya Kawahara

Abstract:

Acoustic-to-word speech recognition based on attention-based encoder-decoder models achieves better accuracy with much lower latency than conventional speech recognition systems. However, acoustic-to-word models require a very large amount of training data, which is difficult to prepare for a new domain such as elderly speech. To address this problem, we propose domain adaptation based on transfer learning with layer freezing. In layer freezing, a network is first pre-trained with the source-domain data, and then a subset of the parameters is re-trained for the target domain while the rest is kept fixed. In the attention-based acoustic-to-word model, the encoder is frozen to maintain its generality, and only the decoder is re-trained to adapt to the target domain. This effectively adapts the latent linguistic capability of the decoder to the target domain. Using a large-scale Japanese spontaneous speech corpus as the source domain, the proposed method is applied to three target domains: a call center task and two voice search tasks by adults and by elderly speakers. The models trained with the proposed method achieved better accuracy than the baseline models, which are trained from scratch or entirely re-trained with the target-domain data.
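
A minimal sketch of the layer-freezing idea described in the abstract, assuming a PyTorch-style attention-based encoder-decoder model. The `AttentionASR` class, its `encoder`/`decoder` submodules, and the optimizer settings are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of encoder freezing for domain adaptation.
# The model structure below (simple LSTM encoder + linear decoder stand-in)
# is assumed for illustration only.
import torch
import torch.nn as nn


class AttentionASR(nn.Module):
    def __init__(self, feat_dim=80, hidden=512, vocab=10000):
        super().__init__()
        # Encoder: maps acoustic features to hidden representations.
        self.encoder = nn.LSTM(feat_dim, hidden, num_layers=4, batch_first=True)
        # Decoder stand-in: predicts word tokens from encoder states.
        self.decoder = nn.Linear(hidden, vocab)

    def forward(self, feats):
        enc_out, _ = self.encoder(feats)
        return self.decoder(enc_out)


# 1) Pre-train on the large source-domain corpus.
model = AttentionASR()
# ... source-domain training loop would go here ...

# 2) Freeze the encoder so it keeps the generality learned from the source domain.
for p in model.encoder.parameters():
    p.requires_grad = False

# 3) Re-train only the decoder (the trainable parameters) on the target-domain data.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4
)
```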


Cite as: Ueno, S., Moriya, T., Mimura, M., Sakai, S., Shinohara, Y., Yamaguchi, Y., Aono, Y., Kawahara, T. (2018) Encoder Transfer for Attention-based Acoustic-to-word Speech Recognition. Proc. Interspeech 2018, 2424-2428, DOI: 10.21437/Interspeech.2018-1424.


BiBTeX Entry:

@inproceedings{Ueno2018,
  author={Sei Ueno and Takafumi Moriya and Masato Mimura and Shinsuke Sakai and Yusuke Shinohara and Yoshikazu Yamaguchi and Yushi Aono and Tatsuya Kawahara},
  title={Encoder Transfer for Attention-based Acoustic-to-word Speech Recognition},
  year=2018,
  booktitle={Proc. Interspeech 2018},
  pages={2424--2428},
  doi={10.21437/Interspeech.2018-1424},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1424}
}