Attention-based End-to-End Models for Small-Footprint Keyword Spotting
Changhao Shan, Junbo Zhang, Yujun Wang and Lei Xie
Abstract:
In this paper, we propose an attention-based end-to-end neural approach for small-footprint keyword spotting (KWS), which aims to simplify the pipeline of building a production-quality KWS system. Our model consists of an encoder and an attention mechanism. The encoder transforms the input signal into a high-level representation using RNNs. The attention mechanism then weights the encoder features and generates a fixed-length vector. Finally, a linear transformation and softmax function convert the vector into a score used for keyword detection. We also evaluate the performance of different encoder architectures, including LSTM, GRU and CRNN. Experiments on real-world wake-up data show that our approach outperforms the recent Deep KWS approach by a large margin, and the best performance is achieved by CRNN. More specifically, with ~84K parameters, our attention-based model achieves a 1.02% false rejection rate (FRR) at 1.0 false alarm (FA) per hour.
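The scoring pipeline described in the abstract (encoder features → attention pooling → linear + softmax) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the encoder outputs are stood in by a random T × D matrix `h`, the attention scorer is assumed to be a single-layer tanh MLP, and all weights are random rather than trained.

```python
# Hedged sketch of attention-based KWS scoring: attention weights pool the
# encoder outputs into one fixed-length vector, which is then classified.
import numpy as np

rng = np.random.default_rng(0)

T, D = 50, 64                        # encoder time steps and feature dimension
h = rng.normal(size=(T, D))          # stand-in for RNN/CRNN encoder outputs

# Attention: score each frame, softmax over time, pool to one vector.
# (The exact attention parameterization here is an assumption.)
W = rng.normal(scale=0.1, size=(D, D))
v = rng.normal(scale=0.1, size=(D,))
e = np.tanh(h @ W) @ v               # (T,) unnormalized attention scores
alpha = np.exp(e - e.max())
alpha /= alpha.sum()                 # (T,) attention weights, sum to 1
c = alpha @ h                        # (D,) fixed-length utterance vector

# Linear transformation + softmax -> keyword vs. non-keyword posterior.
U = rng.normal(scale=0.1, size=(D, 2))
logits = c @ U
p = np.exp(logits - logits.max())
p /= p.sum()
keyword_score = p[1]                 # compared against a tuned threshold
```

At deployment time, `keyword_score` would be compared to a threshold chosen to trade off FRR against FA rate, as in the operating points reported in the paper.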
Cite as: Shan, C., Zhang, J., Wang, Y., Xie, L. (2018) Attention-based End-to-End Models for Small-Footprint Keyword Spotting. Proc. Interspeech 2018, 2037-2041, DOI: 10.21437/Interspeech.2018-1777.
BiBTeX Entry:
@inproceedings{Shan2018,
  author={Changhao Shan and Junbo Zhang and Yujun Wang and Lei Xie},
  title={Attention-based End-to-End Models for Small-Footprint Keyword Spotting},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2037--2041},
  doi={10.21437/Interspeech.2018-1777},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1777}
}