Phoneme-to-Articulatory Mapping Using Bidirectional Gated RNN

Théo Biasutto-Lervat and Slim Ouni

Abstract:

Deriving articulatory dynamics from the acoustic speech signal has been addressed in several speech production studies. In this paper, we investigate whether articulatory dynamics can be predicted from phonetic information alone, without the acoustic speech signal. The input may be considered acoustically impoverished, as it probably carries no explicit coarticulation information, but we expect the phonetic sequence to provide compact yet rich knowledge. Motivated by the recent success of deep learning techniques in acoustic-to-articulatory inversion, we have experimented with bidirectional gated recurrent neural network architectures. We trained these models on an EMA corpus and obtained performance comparable to state-of-the-art articulatory inversion from LSF features, while using only phoneme labels and durations.
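
To make the mapping concrete, the sketch below shows how such a model could look in PyTorch: a bidirectional GRU that regresses frame-level articulatory coordinates from phoneme labels and durations. All names, layer sizes, and the frame-level input encoding here are illustrative assumptions, not the configuration reported in the paper.

import torch
import torch.nn as nn

class PhonemeToArticulatoryGRU(nn.Module):
    # Hypothetical sketch: maps a phoneme sequence (integer labels plus a
    # scalar duration per frame) to articulatory trajectories such as EMA
    # sensor coordinates. All sizes are placeholders, not the paper's values.
    def __init__(self, n_phonemes=40, emb_dim=32, hidden_dim=128, n_ema_channels=12):
        super().__init__()
        self.embed = nn.Embedding(n_phonemes, emb_dim)
        # Each frame is the phoneme embedding concatenated with its duration.
        self.rnn = nn.GRU(emb_dim + 1, hidden_dim,
                          batch_first=True, bidirectional=True)
        # Both GRU directions are concatenated before the linear readout.
        self.out = nn.Linear(2 * hidden_dim, n_ema_channels)

    def forward(self, phonemes, durations):
        # phonemes:  (batch, time) integer phoneme labels
        # durations: (batch, time) normalized phone/frame durations
        x = torch.cat([self.embed(phonemes), durations.unsqueeze(-1)], dim=-1)
        h, _ = self.rnn(x)          # (batch, time, 2 * hidden_dim)
        return self.out(h)          # (batch, time, n_ema_channels)

# Dummy usage: 2 sequences of 50 frames, 12 EMA channels out.
model = PhonemeToArticulatoryGRU()
phonemes = torch.randint(0, 40, (2, 50))
durations = torch.rand(2, 50)
trajectories = model(phonemes, durations)   # shape (2, 50, 12)

The bidirectional pass is what lets the network see upcoming phonemes as well as past ones, which is one way coarticulation context can be recovered even though it is not explicit in the input labels themselves.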


Cite as: Biasutto-Lervat, T., Ouni, S. (2018) Phoneme-to-Articulatory Mapping Using Bidirectional Gated RNN. Proc. Interspeech 2018, 3112-3116, DOI: 10.21437/Interspeech.2018-1202.


BiBTeX Entry:

@inproceedings{Biasutto-Lervat2018,
  author={Théo Biasutto-Lervat and Slim Ouni},
  title={Phoneme-to-Articulatory Mapping Using Bidirectional Gated RNN},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3112--3116},
  doi={10.21437/Interspeech.2018-1202},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1202}
}