Acoustic Modeling Using Adversarially Trained Variational Recurrent Neural Network for Speech Synthesis
Joun Yeop Lee, Sung Jun Cheon, Byoung Jin Choi, Nam Soo Kim and Eunwoo Song
Abstract:
In this paper, we propose a variational recurrent neural network (VRNN)-based method for modeling and generating speech parameter sequences. In recent years, deep learning-based acoustic models have improved the performance of speech synthesis systems over conventional techniques. Among popular deep learning techniques, recurrent neural networks (RNNs) have been successful in efficiently modeling time-dependent sequential data. However, due to the deterministic nature of RNN predictions, such models do not capture the full complexity of highly structured data such as natural speech. In this regard, we propose the adversarially trained variational recurrent neural network (AdVRNN), which uses a VRNN to better represent the variability of natural speech for acoustic modeling in speech synthesis. In addition, we apply an adversarial learning scheme when training the AdVRNN to overcome the oversmoothing problem. We conducted comparative experiments between the proposed AdVRNN and the conventional gated recurrent unit (GRU), a widely used RNN variant, for speech synthesis. The results show that the proposed AdVRNN-based method performs better than the conventional GRU technique.
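To make the modeling idea concrete, below is a minimal sketch (in PyTorch) of one AdVRNN-style training step: a VRNN cell with a Gaussian prior, encoder, and decoder, trained with a reconstruction loss, a KL term, and an adversarial term from a framewise discriminator. This is not the authors' implementation; all names (VRNNCell, disc), layer sizes, and the adversarial weight are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VRNNCell(nn.Module):
    """One step of a variational RNN: a prior network p(z_t | h_{t-1}),
    an encoder q(z_t | x_t, h_{t-1}), a decoder p(x_t | z_t, h_{t-1}),
    and a GRU recurrence over [x_t, z_t]. Sizes are illustrative."""
    def __init__(self, x_dim, z_dim, h_dim):
        super().__init__()
        self.prior = nn.Linear(h_dim, 2 * z_dim)
        self.enc = nn.Linear(x_dim + h_dim, 2 * z_dim)
        self.dec = nn.Linear(z_dim + h_dim, x_dim)
        self.rnn = nn.GRUCell(x_dim + z_dim, h_dim)

    def forward(self, x_t, h):
        mu_q, logvar_q = self.enc(torch.cat([x_t, h], -1)).chunk(2, -1)
        mu_p, logvar_p = self.prior(h).chunk(2, -1)
        # Reparameterization trick: sample z_t from the approximate posterior.
        z_t = mu_q + torch.randn_like(mu_q) * (0.5 * logvar_q).exp()
        x_hat = self.dec(torch.cat([z_t, h], -1))
        h_next = self.rnn(torch.cat([x_t, z_t], -1), h)
        # KL(q || p) between two diagonal Gaussians, summed over latent dims.
        kl = 0.5 * (logvar_p - logvar_q
                    + (logvar_q.exp() + (mu_q - mu_p) ** 2) / logvar_p.exp()
                    - 1.0).sum(-1)
        return x_hat, kl, h_next

x_dim, z_dim, h_dim, B, T = 40, 16, 128, 8, 50
vrnn = VRNNCell(x_dim, z_dim, h_dim)
# Framewise discriminator used for the adversarial loss.
disc = nn.Sequential(nn.Linear(x_dim, 64), nn.LeakyReLU(0.2), nn.Linear(64, 1))
opt_g = torch.optim.Adam(vrnn.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)

x = torch.randn(B, T, x_dim)          # stand-in for a batch of acoustic frames
h = torch.zeros(B, h_dim)
recon, kl_total, fakes = 0.0, 0.0, []
for t in range(T):
    x_hat, kl, h = vrnn(x[:, t], h)
    recon = recon + F.mse_loss(x_hat, x[:, t])
    kl_total = kl_total + kl.mean()
    fakes.append(x_hat)
fake = torch.stack(fakes, 1)

# Generator step: ELBO terms plus an adversarial term that pushes generated
# frames toward the "real" decision region (the 0.1 weight is an assumption).
adv = F.binary_cross_entropy_with_logits(disc(fake), torch.ones(B, T, 1))
loss_g = recon + kl_total + 0.1 * adv
opt_g.zero_grad(); loss_g.backward(); opt_g.step()

# Discriminator step: real frames vs. detached generated frames.
loss_d = (F.binary_cross_entropy_with_logits(disc(x), torch.ones(B, T, 1))
          + F.binary_cross_entropy_with_logits(disc(fake.detach()), torch.zeros(B, T, 1)))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

Training the discriminator on detached generated frames keeps its update from propagating into the VRNN, which is the standard way to alternate the two objectives; the adversarial term in the generator loss is what counteracts the oversmoothing that a pure reconstruction objective tends to produce.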
Cite as: Lee, J.Y., Cheon, S.J., Choi, B.J., Kim, N.S., Song, E. (2018) Acoustic Modeling Using Adversarially Trained Variational Recurrent Neural Network for Speech Synthesis. Proc. Interspeech 2018, 917-921, DOI: 10.21437/Interspeech.2018-1598.
BibTeX Entry:
@inproceedings{Lee2018,
  author={Joun Yeop Lee and Sung Jun Cheon and Byoung Jin Choi and Nam Soo Kim and Eunwoo Song},
  title={Acoustic Modeling Using Adversarially Trained Variational Recurrent Neural Network for Speech Synthesis},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={917--921},
  doi={10.21437/Interspeech.2018-1598},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1598}
}