The Conversation: Deep Audio-Visual Speech Enhancement

Triantafyllos Afouras, Joon Son Chung and Andrew Zisserman

Abstract:

Our goal is to isolate individual speakers from multi-talker simultaneous speech in videos. Existing works in this area have focussed on trying to separate utterances from known speakers in controlled environments. In this paper, we propose a deep audio-visual speech enhancement network that is able to separate a speaker's voice given lip regions in the corresponding video, by predicting both the magnitude and the phase of the target signal. The method is applicable to speakers unheard and unseen during training and for unconstrained environments. We demonstrate strong quantitative and qualitative results, isolating extremely challenging real-world examples.
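
Illustrative sketch (Python): the abstract describes conditioning on the speaker's lip region and predicting both the magnitude and the phase of the target signal. The sketch below is not the authors' implementation; the model object, its inputs, and its magnitude/phase outputs are hypothetical placeholders. Only the final step, recombining a predicted magnitude and phase and inverting the STFT to recover the enhanced waveform, follows the description in the abstract.

import numpy as np
from scipy.signal import stft, istft

def enhance(mixture_wav, lip_frames, model, fs=16000, nperseg=400, noverlap=240):
    """Isolate the target speaker from a noisy mixture using a (hypothetical)
    audio-visual model that predicts the target magnitude and phase."""
    # Complex spectrogram of the noisy mixture.
    _, _, Z_mix = stft(mixture_wav, fs=fs, nperseg=nperseg, noverlap=noverlap)

    # Hypothetical forward pass: the network sees the mixture magnitudes and the
    # cropped lip frames, and returns predicted target magnitude and phase.
    mag_pred, phase_pred = model(np.abs(Z_mix), lip_frames)

    # Recombine predicted magnitude and phase into a complex spectrogram.
    Z_target = mag_pred * np.exp(1j * phase_pred)

    # Inverse STFT back to a time-domain waveform.
    _, enhanced_wav = istft(Z_target, fs=fs, nperseg=nperseg, noverlap=noverlap)
    return enhanced_wav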


Cite as: Afouras, T., Chung, J.S., Zisserman, A. (2018) The Conversation: Deep Audio-Visual Speech Enhancement. Proc. Interspeech 2018, 3244-3248, DOI: 10.21437/Interspeech.2018-1400.


BibTeX Entry:

@inproceedings{Afouras2018,
  author={Triantafyllos Afouras and Joon Son Chung and Andrew Zisserman},
  title={The Conversation: Deep Audio-Visual Speech Enhancement},
  booktitle={Proc. Interspeech 2018},
  year={2018},
  pages={3244--3248},
  doi={10.21437/Interspeech.2018-1400},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1400}
}