Chair: Janet Rutledge, Northwestern University (USA)
Todd Schneider, University of Western Ontario (CANADA)
Donald G. Jamieson, University of Western Ontario (CANADA)
The accurate electro-acoustic characterization of hearing aids is important for the design, assessment, and fitting of these devices. With the prevalence of modern adaptive processing strategies (e.g., level-dependent frequency response, multi-band compression, etc.), it has become increasingly important to evaluate hearing aids using test stimuli that are representative of the signals a hearing aid will be expected to process (e.g., speech). Nearly all current hearing aid tests use stationary signals that can characterize only the steady-state performance of a hearing aid. Our research examines the characteristics of automatic signal processing hearing aids with natural-speech input signals that may cause the hearing aid response to vary over time. We have investigated a number of linear system identification techniques that can be used to develop time-varying models of hearing aids. Using these models, we can begin to characterize the performance of hearing aids with real-world signals and to explore speech-based transient distortion measures.
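The abstract does not name the specific identification techniques used; as one illustrative sketch, a normalized LMS adaptive filter can track a time-varying linear model of a device from its input and recorded output. The function and parameter names below are assumptions for illustration only.

```python
import numpy as np

def nlms_track(x, d, taps=32, mu=0.5, eps=1e-8):
    """Track a (possibly time-varying) linear FIR model of a device.

    x : input signal fed to the device (e.g., speech)
    d : recorded device output
    Returns the sequence of estimated weight vectors, one per sample.
    """
    w = np.zeros(taps)
    xbuf = np.zeros(taps)
    history = []
    for n in range(len(x)):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[n]
        y = w @ xbuf                                  # model prediction
        e = d[n] - y                                  # prediction error
        w = w + mu * e * xbuf / (eps + xbuf @ xbuf)   # normalized LMS update
        history.append(w.copy())
    return np.array(history)

# Toy example: identify a static 2-tap system h = [1.0, 0.5]
rng = np.random.default_rng(0)
x = rng.standard_normal(4000)
d = np.convolve(x, [1.0, 0.5])[:len(x)]
W = nlms_track(x, d, taps=4)
```

Because the weight vector is re-estimated at every sample, plotting `W` over time reveals how an adaptive hearing aid's effective linear response changes as the input level fluctuates.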
Douglas M. Chabries, Brigham Young University (USA)
David V. Anderson, Brigham Young University (USA)
Thomas G. Stockham Jr., Brigham Young University (USA)
Richard W. Christiansen, Brigham Young University (USA)
A model is proposed which mathematically transforms an acoustic stimulus into a form that is believed to be more nearly related to that used by the auditory cortex to interpret sound. The model is based upon recent research toward understanding the response of the human auditory system to sound stimuli. The motivation and approach in developing this model follow the philosophy pursued in the development of a similar model for the human visual system. Application of this model to the problem of hearing compensation for impaired individuals is shown to yield a bank of bandpass filters, each followed by a homomorphic multiplicative AGC. Clinical tests on hearing-impaired subjects suggest that this approach is superior to other compensation schemes.
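A minimal single-band sketch of a homomorphic multiplicative AGC is shown below (the paper applies one such AGC after each bandpass filter in the bank). The envelope smoother, compression ratio, and test signal here are illustrative assumptions, not the authors' parameters.

```python
import numpy as np

def homomorphic_agc(band, fs, ratio=2.0, tau=0.01, eps=1e-6):
    """Multiplicative AGC applied in the log-envelope (homomorphic) domain.

    band  : output of one bandpass filter
    ratio : compression ratio (2.0 halves the dynamic range in dB)
    tau   : envelope smoothing time constant in seconds
    """
    a = np.exp(-1.0 / (tau * fs))            # one-pole smoother coefficient
    env = np.empty_like(band)
    state = eps
    for n, s in enumerate(np.abs(band)):     # rectified, smoothed envelope
        state = a * state + (1 - a) * s
        env[n] = state
    log_env = np.log(env + eps)
    # Multiplicative gain shrinks envelope variation by the compression ratio
    gain = np.exp((1.0 / ratio - 1.0) * log_env)
    return band * gain

fs = 16000
t = np.arange(fs) / fs
# 1 kHz tone whose amplitude steps from 0.1 to 1.0 halfway through (20 dB)
amp = np.where(t < 0.5, 0.1, 1.0)
x = amp * np.sin(2 * np.pi * 1000 * t)
y = homomorphic_agc(x, fs, ratio=2.0)
```

Because the gain is computed from the log of the envelope, multiplying the input by a constant simply shifts the log-envelope, so a 20 dB input level step emerges as roughly a 10 dB output step at a 2:1 ratio.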
David V. Anderson, Brigham Young University (USA)
Richard W. Harris, Brigham Young University (USA)
Douglas M. Chabries, Brigham Young University (USA)
A new hearing compensation algorithm based on a homomorphic multiplicative AGC (automatic gain control) is evaluated and compared against commercially available digitally programmable analog hearing aids. Both quantitative (speech recognition and threshold) and qualitative tests were used in the evaluation. The new algorithm is shown to make significant progress toward restoring normal or near-normal hearing for hearing-impaired individuals.
Manfred Leisenberg, University of Southampton (UK)
Institute of Sound and Vibration Research, Southampton SO9 5NH, UK
A new speech processing concept for Cochlear Implant (CI) systems has been developed. It is based on robust feature extraction and a neural net classifier: feature coefficients, extracted either by the relative spectral perceptual linear predictive technique or by regular CI filtering, are classified into "auditory related units". The classifier is based on an adapted self-organizing Kohonen algorithm which finds representative clusters in the input feature vector space. These clusters are closely related to the statistical distribution of the feature coefficients and represent phonetic units. Firing neural net output nodes control the synthesis of a limited "stimulus pattern alphabet". Each "letter" represents a subphoneme and is linked to a highly distinguishable complex stimulus pattern. The concept has been implemented with CINSTIM V2.0. First experimental results confirm the new CI speech processing strategy.
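A minimal one-dimensional Kohonen map illustrates the clustering step described above: each trained node becomes a representative cluster centre, and the index of the winning ("firing") node for a frame would select a letter of the stimulus pattern alphabet. The toy 2-D features, map size, and schedules below are illustrative assumptions, not the CINSTIM parameters.

```python
import numpy as np

def train_som(features, n_nodes=8, epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D Kohonen self-organizing map over feature vectors."""
    rng = np.random.default_rng(seed)
    dim = features.shape[1]
    nodes = rng.standard_normal((n_nodes, dim)) * 0.1
    idx = np.arange(n_nodes)
    for e in range(epochs):
        lr = lr0 * (1 - e / epochs)                   # decaying learning rate
        sigma = sigma0 * (1 - e / epochs) + 0.5       # shrinking neighborhood
        for f in rng.permutation(features):
            winner = np.argmin(np.linalg.norm(nodes - f, axis=1))
            h = np.exp(-((idx - winner) ** 2) / (2 * sigma ** 2))
            nodes += lr * h[:, None] * (f - nodes)    # pull winner + neighbors
    return nodes

def classify(nodes, f):
    """Index of the firing (best-matching) output node for one frame."""
    return int(np.argmin(np.linalg.norm(nodes - f, axis=1)))

# Toy data: two well-separated clusters of 2-D "feature coefficients"
rng = np.random.default_rng(1)
a = rng.normal([0, 0], 0.1, (100, 2))
b = rng.normal([5, 5], 0.1, (100, 2))
som = train_som(np.vstack([a, b]), n_nodes=4)
```

After training, frames drawn from different clusters fire different output nodes, which is the property the stimulus-pattern mapping relies on.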
Brian Strope, University of California at Los Angeles (USA)
Abeer Alwan, University of California at Los Angeles (USA)
A simple structure that compensates for the frequency-dependent loudness recruitment of sensorineural hearing loss is presented. The non-linear structure estimates total input signal energy, and then weights and combines the outputs of two parallel filters based on this energy estimate. Preliminary evaluation of this structure with noise-masked normal-hearing listeners, using speech recorded in naturally noisy environments, shows a 15-20% increase in word recognition scores when compared to a linear structure.
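The two-filter structure can be sketched as follows. The filter shapes, frame size, and energy-to-weight mapping here are illustrative assumptions (the abstract does not specify them); the essential idea is that a frame's energy estimate sets the mixing weight between two fixed parallel filters.

```python
import numpy as np

def recruitment_comp(x, fs, frame=256):
    """Energy-weighted combination of two parallel filters.

    Quiet frames receive more high-frequency emphasis (boosting sounds
    near threshold), while loud frames approach a flat response,
    approximating level-dependent recruitment compensation.
    """
    # Two parallel FIR filters: flat, and a simple high-frequency emphasis
    h_flat = np.zeros(16); h_flat[0] = 1.0
    h_hf = np.zeros(16); h_hf[0], h_hf[1] = 1.0, -0.9
    y_flat = np.convolve(x, h_flat)[:len(x)]
    y_hf = np.convolve(x, h_hf)[:len(x)]
    out = np.empty_like(x)
    for start in range(0, len(x), frame):
        seg = slice(start, start + frame)
        level = 10 * np.log10(np.mean(x[seg] ** 2) + 1e-12)   # frame dB
        # Map level to a 0..1 weight: quiet -> 1 (emphasis), loud -> 0 (flat)
        w = np.clip((-20 - level) / 40, 0.0, 1.0)
        out[seg] = w * y_hf[seg] + (1 - w) * y_flat[seg]
    return out

fs = 16000
t = np.arange(fs // 4) / fs
quiet = 0.001 * np.sin(2 * np.pi * 6000 * t)   # quiet high-frequency tone
boosted = recruitment_comp(quiet, fs)
```

With this mapping, the quiet 6 kHz tone is amplified by the emphasis filter, while the same tone at high level would pass through essentially unchanged.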