Multimodal Speech Synthesis Architecture for Unsupervised Speaker Adaptation

Hieu-Thi Luong and Junichi Yamagishi

Abstract:

This paper proposes a new architecture for speaker adaptation of multi-speaker neural-network speech synthesis systems, in which an unseen speaker’s voice can be synthesized using a relatively small amount of untranscribed speech data for adaptation. This is sometimes called “unsupervised speaker adaptation”. More specifically, we concatenate the layers to the audio inputs when performing unsupervised speaker adaptation, while we concatenate them to the text inputs when synthesizing speech from text. Two new training schemes for this new architecture are also proposed in this paper. These training schemes are not limited to speech synthesis; other applications are suggested. Experimental results show that the proposed model not only enables adaptation to unseen speakers using untranscribed speech but also improves the performance of multi-speaker modeling and speaker adaptation using transcribed audio files.
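The sketch below is a minimal, illustrative example (in PyTorch) of the kind of multimodal setup the abstract describes: a text encoder and an audio encoder share a single decoder together with a speaker embedding, so that untranscribed audio can drive speaker adaptation while text drives synthesis. The class name MultimodalTTS, the layer sizes, and the reconstruction-style adaptation step are assumptions made for illustration; they are not the authors' exact architecture or training schemes.

import torch
import torch.nn as nn

class MultimodalTTS(nn.Module):
    def __init__(self, text_dim=300, audio_dim=80, spk_dim=32,
                 hidden=256, n_speakers=100):
        super().__init__()
        # Two modality-specific encoders mapping into a shared hidden space.
        self.text_enc = nn.Sequential(nn.Linear(text_dim, hidden), nn.ReLU())
        self.audio_enc = nn.Sequential(nn.Linear(audio_dim, hidden), nn.ReLU())
        self.spk_emb = nn.Embedding(n_speakers, spk_dim)
        # Shared decoder consumes an encoder output concatenated with a speaker code.
        self.decoder = nn.Sequential(
            nn.Linear(hidden + spk_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, audio_dim),
        )

    def forward(self, x, spk_id, modality="text"):
        enc = self.text_enc(x) if modality == "text" else self.audio_enc(x)
        spk = self.spk_emb(spk_id).expand(enc.size(0), -1)
        return self.decoder(torch.cat([enc, spk], dim=-1))

model = MultimodalTTS()

# Unsupervised adaptation step (illustrative): reconstruct untranscribed
# acoustic frames through the audio branch, so no text is required for the
# unseen speaker; only the speaker-dependent parameters would be updated.
frames = torch.randn(50, 80)        # 50 frames of 80-dim acoustic features
spk = torch.tensor([0])
recon = model(frames, spk, modality="audio")
adapt_loss = nn.functional.mse_loss(recon, frames)

# Synthesis: the same decoder and speaker code, now driven by text features.
linguistic = torch.randn(50, 300)   # 50 frames of 300-dim linguistic features
predicted = model(linguistic, spk, modality="text")

The point of the sketch is the switch of modality feeding the shared layers: audio during adaptation (so transcriptions are unnecessary) and text at synthesis time.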


Cite as: Luong, H., Yamagishi, J. (2018) Multimodal Speech Synthesis Architecture for Unsupervised Speaker Adaptation. Proc. Interspeech 2018, 2494-2498, DOI: 10.21437/Interspeech.2018-1791.


BiBTeX Entry:

@inproceedings{Luong2018,
  author={Hieu-Thi Luong and Junichi Yamagishi},
  title={Multimodal Speech Synthesis Architecture for Unsupervised Speaker Adaptation},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2494--2498},
  doi={10.21437/Interspeech.2018-1791},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1791}
}