


Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator

Pei-Hung Chung, Kuan Tung, Ching-Lun Tai and Hung-yi Lee

Abstract:

User-machine interaction is crucial for information retrieval, especially for spoken content retrieval, because spoken content is difficult to browse and speech recognition has a high degree of uncertainty. In interactive retrieval, the machine takes different actions to interact with the user to obtain better retrieval results; here it is critical to select the most efficient action. In previous work, deep Q-learning techniques were proposed to train an interactive retrieval system, but they relied on a hand-crafted user simulator, and building a reliable user simulator is difficult. In this paper, we further improve the interactive spoken content retrieval framework by proposing a learnable user simulator which is jointly trained with the interactive retrieval system, making the hand-crafted user simulator unnecessary. The experimental results show that the learned simulated users not only achieve larger rewards than the hand-crafted ones but also act more like real users.
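
The abstract describes training the retrieval agent with deep Q-learning while jointly learning a user simulator from the same interaction signal. Below is a minimal, illustrative Python sketch of such a joint-training loop on a toy environment. It is a sketch under stated assumptions, not the paper's implementation: the tabular Q-table standing in for a deep Q-network, the toy state/reward definitions, and the REINFORCE-style update for the simulator are all assumptions introduced here for illustration.

# Illustrative sketch only: an interactive retrieval agent trained with
# Q-learning while a parameterized user simulator is updated from the same
# interaction reward, so no hand-crafted simulator is needed.
# The environment, sizes, and update rules are toy assumptions, not the paper's.

import numpy as np

rng = np.random.default_rng(0)

N_STATES, N_ACTIONS, N_RESPONSES = 5, 3, 2   # toy sizes
GAMMA, LR_Q, LR_SIM = 0.9, 0.1, 0.01

# Retrieval agent: tabular Q-values over (dialogue state, machine action).
Q = np.zeros((N_STATES, N_ACTIONS))

# Trainable user simulator: per-(state, action) logits over user responses.
sim_logits = np.zeros((N_STATES, N_ACTIONS, N_RESPONSES))

def simulate_user(state, action):
    """Sample a user response from the learnable simulator."""
    logits = sim_logits[state, action]
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    response = rng.choice(N_RESPONSES, p=probs)
    return response, probs

def toy_env_step(state, action, response):
    """Stand-in environment: next state and a retrieval-quality reward."""
    next_state = (state + action + response) % N_STATES
    done = next_state == N_STATES - 1
    reward = 1.0 if done else -0.1
    return next_state, reward, done

for episode in range(2000):
    state, done = 0, False
    while not done:
        # epsilon-greedy machine action from the current Q estimates
        action = (rng.integers(N_ACTIONS) if rng.random() < 0.1
                  else int(Q[state].argmax()))
        response, probs = simulate_user(state, action)
        next_state, reward, done = toy_env_step(state, action, response)

        # Q-learning update for the retrieval agent
        target = reward + (0.0 if done else GAMMA * Q[next_state].max())
        Q[state, action] += LR_Q * (target - Q[state, action])

        # REINFORCE-style update so the simulator is trained jointly
        # from the same reward signal instead of being hand-crafted.
        grad = -probs
        grad[response] += 1.0
        sim_logits[state, action] += LR_SIM * reward * grad

        state = next_state

print("Learned Q-values:\n", Q)

In the paper's full system, a deep Q-network over retrieval states would replace the tabular Q-values, and the simulator would be a trainable neural model rather than a logit table; the sketch only illustrates the alternating joint-update structure.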


Cite as: Chung, P.-H., Tung, K., Tai, C.-L., Lee, H.-Y. (2018) Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator. Proc. Interspeech 2018, 2032-2036, DOI: 10.21437/Interspeech.2018-1346.


BiBTeX Entry:

@inproceedings{Chung2018,
  author={Pei-Hung Chung and Kuan Tung and Ching-Lun Tai and Hung-yi Lee},
  title={Joint Learning of Interactive Spoken Content Retrieval and Trainable User Simulator},
  year={2018},
  booktitle={Proc. Interspeech 2018},
  pages={2032--2036},
  doi={10.21437/Interspeech.2018-1346},
  url={http://dx.doi.org/10.21437/Interspeech.2018-1346}
}