R-CRNN: Region-based Convolutional Recurrent Neural Network for Audio Event Detection

Chieh-Chi Kao, Weiran Wang, Ming Sun and Chao Wang

Abstract:

This paper proposes a Region-based Convolutional Recurrent Neural Network (R-CRNN) for audio event detection (AED). The proposed network is inspired by Faster-RCNN [1], a well-known region-based convolutional network framework for visual object detection. Unlike the original Faster-RCNN, a recurrent layer is added on top of the convolutional network to capture long-term temporal context from the extracted high-level features. While most previous works on AED first generate frame-level predictions and then post-process the resulting probability sequence to obtain event onset/offset timestamps, the proposed method generates predictions directly at the event level and can be trained end-to-end with a multi-task loss that optimizes the classification and localization of audio events simultaneously. The proposed method is evaluated on the DCASE 2017 Challenge dataset [2]. To the best of our knowledge, R-CRNN is the best performing single-model (non-ensemble) method on both the development and evaluation sets. Compared to the other region-based network for AED (R-FCN [3]), which achieves an event-based error rate (ER) of 0.18 on the development set, our method cuts the ER in half.
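As a concrete illustration of the pipeline the abstract describes (a convolutional feature extractor, a recurrent layer on top for long-term temporal context, and a joint classification/localization objective), below is a minimal PyTorch sketch. It is not the authors' code: the bidirectional GRU, layer sizes, anchor layout, and loss weight lam are assumptions made for illustration only; the paper itself specifies none of these details here.

# Minimal sketch of an R-CRNN-style detector in PyTorch. Not the authors'
# implementation: the GRU choice, layer sizes, anchor scheme, and loss
# weighting are illustrative assumptions. Only the overall structure from
# the abstract is followed: CNN features, a recurrent layer for long-term
# temporal context, and a joint multi-task loss over event classification
# and onset/offset localization.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RCRNNSketch(nn.Module):
    def __init__(self, n_mels=64, n_classes=3, n_anchors=4, hidden=128):
        super().__init__()
        self.n_classes = n_classes
        self.n_anchors = n_anchors
        # Convolutional front end over a log-mel spectrogram of shape
        # (batch, 1, time, n_mels); pooling reduces frequency only, so the
        # temporal resolution of the per-step anchors is preserved.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # Recurrent layer on top of the CNN features to capture long-term
        # temporal context (a bidirectional GRU is an assumption; the
        # abstract only says "a recurrent layer").
        self.rnn = nn.GRU(64 * (n_mels // 16), hidden,
                          batch_first=True, bidirectional=True)
        # Per-time-step, per-anchor heads in the spirit of region-based
        # detectors: event classification plus onset/offset regression.
        self.cls_head = nn.Linear(2 * hidden, n_anchors * n_classes)
        self.loc_head = nn.Linear(2 * hidden, n_anchors * 2)

    def forward(self, spec):
        h = self.conv(spec)                             # (B, C, T, F')
        b, c, t, f = h.shape
        h = h.permute(0, 2, 1, 3).reshape(b, t, c * f)  # time-major features
        h, _ = self.rnn(h)                              # (B, T, 2*hidden)
        cls = self.cls_head(h).view(b, t, self.n_anchors, self.n_classes)
        loc = self.loc_head(h).view(b, t, self.n_anchors, 2)
        return cls, loc

def multi_task_loss(cls_logits, loc_preds, cls_targets, loc_targets,
                    pos_mask, lam=1.0):
    # Joint objective: classification over all anchors plus smooth-L1
    # boundary regression on positive (event-matched) anchors only. The
    # smooth-L1 term and the weight lam are Faster-RCNN-style assumptions,
    # not values taken from the paper.
    cls_loss = F.cross_entropy(cls_logits.reshape(-1, cls_logits.shape[-1]),
                               cls_targets.reshape(-1))
    loc_loss = F.smooth_l1_loss(loc_preds[pos_mask], loc_targets[pos_mask])
    return cls_loss + lam * loc_loss

# Example: cls, loc = RCRNNSketch()(torch.randn(2, 1, 500, 64))

Training end-to-end with such a joint loss is what lets the network predict events directly, rather than thresholding a frame-level probability sequence in a separate post-processing step.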


Cite as: Kao, C., Wang, W., Sun, M., Wang, C. (2018) R-CRNN: Region-based Convolutional Recurrent Neural Network for Audio Event Detection. Proc. Interspeech 2018, 1358-1362, DOI: 10.21437/Interspeech.2018-2323.


BibTeX Entry:

@inproceedings{Kao2018,
  author={Chieh-Chi Kao and Weiran Wang and Ming Sun and Chao Wang},
  title={R-CRNN: Region-based Convolutional Recurrent Neural Network for Audio Event Detection},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={1358--1362},
  doi={10.21437/Interspeech.2018-2323},
  url={http://dx.doi.org/10.21437/Interspeech.2018-2323}
}