What Do Classifiers Actually Learn? A Case Study on Emotion Recognition Datasets

Patrick Meyer, Eric Buschermöhle and Tim Fingscheidt

Abstract:

In supervised learning, a typical method to ensure that a classifier has desirable generalization properties is to split the available data into training, validation, and test subsets. Given a proper data split, we typically trust our results on the test data. But what do classifiers actually learn? In this case study we show how important it is to precisely analyze the available data and its inherent dependencies w.r.t. the class labels. We present an example based on a popular database for speech emotion recognition, where a minor change of the data split results in an accuracy decrease of about 55% absolute, leading to the conclusion that the classifier has learned linguistic content instead of the desired speech emotions.
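The kind of data dependency at issue can be made concrete with a small splitting sketch. The Python snippet below is our illustration, not code from the paper; the toy features, labels, and speaker IDs are all hypothetical. It contrasts a naive random split, which can place utterances of the same speaker and sentence on both sides, with a grouped, speaker-independent split built with scikit-learn's GroupShuffleSplit:

import numpy as np
from sklearn.model_selection import GroupShuffleSplit, train_test_split

rng = np.random.default_rng(0)

# Hypothetical toy data: 200 utterances, 10 speakers, 4 emotion classes.
X = rng.normal(size=(200, 13))            # e.g., one 13-dim feature vector per utterance
y = rng.integers(0, 4, size=200)          # emotion class labels
speakers = rng.integers(0, 10, size=200)  # speaker ID for each utterance

# Naive random split: the same speakers (and spoken sentences) can
# appear in both the training and the test data.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

# Speaker-independent split: every speaker's utterances land entirely on
# one side, so the classifier cannot pass the test by memorizing
# speaker- or content-specific cues.
gss = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(gss.split(X, y, groups=speakers))
assert set(speakers[train_idx]).isdisjoint(set(speakers[test_idx]))

Under the grouped split, good test accuracy can no longer be obtained by memorizing speaker- or content-specific shortcuts, which is exactly the failure mode the abstract describes.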


Cite as: Meyer, P., Buschermöhle, E., Fingscheidt, T. (2018) What Do Classifiers Actually Learn? A Case Study on Emotion Recognition Datasets. Proc. Interspeech 2018, 262-266, DOI: 10.21437/Interspeech.2018-1851.


BibTeX Entry:

@inproceedings{Meyer2018,
  author    = {Patrick Meyer and Eric Buschermöhle and Tim Fingscheidt},
  title     = {What Do Classifiers Actually Learn? {A} Case Study on Emotion Recognition Datasets},
  booktitle = {Proc. Interspeech 2018},
  year      = {2018},
  pages     = {262--266},
  doi       = {10.21437/Interspeech.2018-1851},
  url       = {http://dx.doi.org/10.21437/Interspeech.2018-1851}
}