Fast Variational Bayes for Heavy-tailed PLDA Applied to i-vectors and x-vectors

Anna Silnova, Niko Brümmer, Daniel Garcia-Romero, David Snyder and Lukáš Burget

Abstract:

The standard state-of-the-art backend for text-independent speaker recognizers that use i-vectors or x-vectors is Gaussian PLDA (G-PLDA), assisted by a Gaussianization step involving length normalization. G-PLDA can be trained with either generative or discriminative methods. It has long been known that heavy-tailed PLDA (HT-PLDA), applied without length normalization, gives similar accuracy, but at considerable extra computational cost. We have recently introduced a fast scoring algorithm for a discriminatively trained HT-PLDA backend. This paper extends that work by introducing a fast, variational Bayes, generative training algorithm. We compare old and new backends, with and without length normalization, with i-vectors and x-vectors, on SRE’10, SRE’16 and SITW.
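Since the abstract contrasts backends with and without length normalization, the following is a minimal Python sketch of the two ingredients it mentions: the length-normalization (Gaussianization) step used with G-PLDA, and sampling from a simplified heavy-tailed PLDA generative model, where a Gamma-distributed precision scaling factor makes the channel noise t-distributed. The function names, the isotropic noise covariance, and the dimensions are illustrative assumptions for this sketch, not the paper's exact model or its variational Bayes training algorithm.

import numpy as np

rng = np.random.default_rng(0)

def length_normalize(X):
    """Gaussianization step for G-PLDA: scale each embedding (one per
    row) to unit Euclidean length, projecting it onto the unit sphere."""
    return X / np.linalg.norm(X, axis=1, keepdims=True)

def sample_htplda_speaker(F, mu, nu, n):
    """Draw n recordings of one speaker from a simplified HT-PLDA model
    (illustrative assumption: isotropic channel noise rather than a full
    within-class precision matrix). A Gamma-distributed precision scale
    with mean 1 makes the noise marginally t-distributed with nu degrees
    of freedom, i.e. heavy-tailed for small nu."""
    d, dz = F.shape
    z = rng.standard_normal(dz)                    # speaker identity variable
    lam = rng.gamma(nu / 2.0, 2.0 / nu, size=n)    # precision scales, E[lam] = 1
    noise = rng.standard_normal((n, d)) / np.sqrt(lam)[:, None]
    return mu + z @ F.T + noise

if __name__ == "__main__":
    d, dz = 512, 150   # embedding and speaker-subspace dimensions (illustrative)
    F = 0.1 * rng.standard_normal((d, dz))
    X = sample_htplda_speaker(F, np.zeros(d), nu=3.0, n=5)
    print(length_normalize(X).shape)   # (5, 512)

As nu grows large, the Gamma scales concentrate at 1 and the model reduces to Gaussian PLDA, which is why G-PLDA with length normalization and HT-PLDA without it can behave similarly in practice.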


Cite as: Silnova, A., Brümmer, N., Garcia-Romero, D., Snyder, D., Burget, L. (2018) Fast Variational Bayes for Heavy-tailed PLDA Applied to i-vectors and x-vectors. Proc. Interspeech 2018, 72-76, DOI: 10.21437/Interspeech.2018-2128.


BibTeX Entry:

@inproceedings{Silnova2018,
  author={Anna Silnova and Niko Brümmer and Daniel Garcia-Romero and David Snyder and Lukáš Burget},
  title={Fast Variational Bayes for Heavy-tailed PLDA Applied to i-vectors and x-vectors},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={72--76},
  doi={10.21437/Interspeech.2018-2128},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2128}
}