Early Detection of Continuous and Partial Audio Events Using CNN
Ian McLoughlin, Yan Song, Lam Dang Pham, Ramaswamy Palaniappan, Huy Phan and Yue Lang
Abstract:
Sound event detection extends the static auditory classification task into continuous environments, where performance depends jointly upon the detection of overlapping events and their correct classification. Several approaches have been published to date, which either develop novel classifiers or pair well-trained static classifiers with a detection front-end. This paper takes the latter approach, combining a proven CNN classifier acting on spectrogram image features with time-frequency shaped energy detection that identifies seed regions in the spectrogram characteristic of auditory energy events. Furthermore, the shape detector is optimised to allow early detection of events as they develop. Since some sound events naturally last longer than others, waiting for entire events to complete before classification may not be practical in a deployed system. The early detection capability of the system is therefore evaluated for the classification of partial events. Performance for continuous event detection is shown to be good, and accuracy is well maintained when detecting partial events.
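The abstract describes a two-stage pipeline: an energy-based detector proposes candidate time-frequency seed regions, and a trained CNN classifies fixed-size spectrogram patches cut from those regions, even when only the start of an event is available. The Python outline below is an illustrative sketch, not the authors' code: it stands in a plain per-frame energy threshold for the paper's time-frequency shaped detector, and cnn_model, patch_frames and thresh_db are assumed placeholders.

    # Sketch of energy-seeded detection followed by CNN classification of
    # spectrogram patches (illustrative only; threshold and patch size are assumptions).
    import numpy as np
    from scipy.signal import spectrogram

    def detect_and_classify(audio, fs, cnn_model, patch_frames=40, thresh_db=12.0):
        """Find high-energy seed frames, then classify spectrogram patches with a CNN."""
        f, t, S = spectrogram(audio, fs=fs, nperseg=512, noverlap=256)
        log_S = 10.0 * np.log10(S + 1e-10)

        # Per-frame energy relative to an estimated noise floor (median over time).
        frame_energy = log_S.mean(axis=0)
        noise_floor = np.median(frame_energy)
        seeds = np.where(frame_energy > noise_floor + thresh_db)[0]

        detections = []
        for idx in seeds:
            # Take a patch starting at the seed; it may end before the event
            # completes, modelling early / partial-event classification.
            end = min(idx + patch_frames, log_S.shape[1])
            patch = log_S[:, idx:end]
            if patch.shape[1] < patch_frames:
                # Pad partial events to the CNN's fixed input width.
                pad = patch_frames - patch.shape[1]
                patch = np.pad(patch, ((0, 0), (0, pad)), mode="edge")
            # cnn_model is a hypothetical pre-trained classifier (e.g. Keras-style).
            label = cnn_model.predict(patch[np.newaxis, ..., np.newaxis])
            detections.append((t[idx], label))
        return detections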
Cite as: McLoughlin, I., Song, Y., Pham, L.D., Palaniappan, R., Phan, H., Lang, Y. (2018) Early Detection of Continuous and Partial Audio Events Using CNN. Proc. Interspeech 2018, 3314-3318, DOI: 10.21437/Interspeech.2018-1821.
BibTeX Entry:
@inproceedings{McLoughlin2018,
  author={Ian McLoughlin and Yan Song and Lam Dang Pham and Ramaswamy Palaniappan and Huy Phan and Yue Lang},
  title={Early Detection of Continuous and Partial Audio Events Using CNN},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={3314--3318},
  doi={10.21437/Interspeech.2018-1821},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1821}
}