934 - Predicting speech intelligibility and security using artificial neural network models
Xu J.
Abstract
Artificial neural network models for predicting speech intelligibility scores and security thresholds were developed in a previous work [Xu et al., "An artificial neural network approach for predicting architectural speech security," Journal of the Acoustical Society of America, 117 (4), pp 1709-1712, 2005]. The present work uses an application example to show in detail how these models can be embedded into a spreadsheet application and applied at the design stage. Using the same example, the present work also investigates how the speech intelligibility scores and security thresholds vary with the construction used for the common partition between the speech sound source room and the speech sound receiving room. The results show that, for a typical setup of private offices with a speech sound level of 68 dB(A) in the source room and a background noise level of 39 dB(A) in the receiving room, about 30% of overheard words would be intelligible when the common partition has an STC rating of around 45; only a very small percentage, or none, of the overheard words would be intelligible when the STC rating is increased to 50; and all speech sounds from the source room would be completely inaudible when the STC rating is increased to 60.
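The three reported operating points (STC 45, 50, and 60 at the stated speech and noise levels) can be summarized as a rough intelligibility-versus-STC trend. The sketch below is an illustration only, assuming simple piecewise-linear interpolation between the abstract's reported figures; the function name and the interpolation scheme are assumptions, and the actual predictions in the paper come from the authors' trained neural network models, not from this approximation.

```python
def approx_intelligibility(stc):
    """Illustrative piecewise-linear interpolation of the abstract's
    reported points (speech 68 dB(A), background noise 39 dB(A)):
      STC 45 -> ~30% of overheard words intelligible
      STC 50 -> ~0% intelligible (speech may still be audible)
      STC 60 -> speech completely inaudible (0%)
    NOT the paper's neural network model; an assumption for illustration."""
    points = [(45, 30.0), (50, 0.0), (60, 0.0)]
    if stc <= points[0][0]:
        return points[0][1]
    if stc >= points[-1][0]:
        return points[-1][1]
    # Linear interpolation between the two bracketing reported points.
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= stc <= x1:
            return y0 + (y1 - y0) * (stc - x0) / (x1 - x0)

print(approx_intelligibility(45))    # 30.0
print(approx_intelligibility(47.5))  # 15.0
print(approx_intelligibility(60))    # 0.0
```

Such a lookup could itself live in a spreadsheet cell formula, which is consistent with the paper's point that the trained models can be embedded in a spreadsheet application for use at the design stage.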
Citation
Xu J.: Predicting speech intelligibility and security using artificial neural network models, CD-ROM Proceedings of the Thirteenth International Congress on Sound and Vibration (ICSV13), July 2-6, 2006, Vienna, Austria, Eds.: Eberhardsteiner, J.; Mang, H.A.; Waubke, H., Publisher: Vienna University of Technology, Austria, ISBN: 3-9501554-5-7