Signal processing algorithms for digital hearing aids

Author:
  1. Álvarez Pérez, Lorena
Supervised by:
  1. Enrique Alexandre Cortizo (Director)

Defence university: Universidad de Alcalá

Defence date: 09 March 2012

Committee:
  1. Manuel Rosa Zurera (Chair)
  2. Lucas Cuadra Rodriguez (Secretary)
  3. Antonio Pena Giménez (Committee member)
  4. Máximo Cobos Serrano (Committee member)
  5. Aníbal João de Sousa Ferreira (Committee member)

Type: Thesis

Abstract

Hearing loss is a problem that severely affects speech communication and prevents most hearing-impaired people from leading a normal life. Although the vast majority of hearing loss cases could be corrected by using hearing aids, only a small fraction of the hearing-impaired people who could benefit from hearing aids actually purchase one. This limited use of hearing aids arises from a problem that, to date, has not been solved effectively and comfortably: the automatic adaptation of the hearing aid to the changing acoustic environment that surrounds its user. There are two approaches to this problem. On the one hand, the "manual" approach, in which the user has to identify the acoustic situation and choose the adequate amplification program, has been found to be very uncomfortable. The second approach consists in including an automatic program selection system within the hearing aid. This latter approach is deemed very useful by most hearing aid users, even if its performance is not completely perfect. Although the need for such a sound classification system seems clear, its implementation is a very difficult matter. The development of an automatic sound classification system in a digital hearing aid is a challenging goal because of the inherent limitations of the Digital Signal Processor (DSP) the hearing aid is based on: most digital hearing aids have very strong constraints in terms of computational capacity, memory and battery, which seriously limit the implementation of advanced algorithms on them. With this in mind, this thesis focuses on the design and implementation of a prototype digital hearing aid able to automatically classify the acoustic environments hearing aid users face daily and to select the amplification program best adapted to each environment, aiming at enhancing the speech intelligibility perceived by the user.
The most important contribution of this thesis is the implementation of a prototype digital hearing aid that automatically classifies the acoustic environment surrounding its user and selects the most appropriate amplification program for that environment, aiming at enhancing the sound quality perceived by the user. The thesis can be divided into three major parts. The first is the design of an automatic sound classification system that properly discriminates the input sound signal into speech, music and noise (the acoustic environments considered in this thesis). Note that this part involves not only the selection of the best-suited feature set, but also the selection of the most appropriate classification algorithm and the optimization of its parameters for its subsequent implementation on the DSP. The second part deals with the design of an approach that aims at enhancing speech in hearing aids, not only in terms of speech intelligibility but also in terms of speech quality. Finally, the third part, probably the most important from the practical point of view, describes in detail the way both the automatic sound classification system and the speech enhancement approach are implemented on the DSP used to carry out the experiments. The main contributions of this thesis are listed below:

• The design of a set of low-complexity features. The key advantage of this feature set is that the number of DSP instructions required for its computation is extremely low.

• A feature-selection approach for sound classification in hearing aids through restricted search driven by genetic algorithms.

• A combined growing-pruning method for multilayer perceptrons (MLPs) that aims at finding the most appropriate number of hidden neurons in an MLP for a particular classification task.
• An algorithm for automatically selecting, among a number of piecewise linear approximations, the "approximated" activation function best suited to each of the hidden and output neurons comprising a multilayer perceptron.

• The design of a gain function aiming at speech enhancement in hearing aids, not only in terms of speech quality but also in terms of speech intelligibility. This gain function is built from a Gaussian mixture model tuned by a genetic algorithm.

• An approach that aims at simplifying the implementation of the compressor-expander algorithm (the very core of a hearing aid) on the DSP. In practice, this approach consists in storing, in the data memory of the DSP, a table containing "tabulated" values of the gain to be applied as a function of both the input signal level (dB SPL) and the frequency band.

The final, global conclusion is that we have implemented a prototype digital hearing aid that automatically classifies the acoustic environment surrounding its user and selects the most appropriate amplification program for that environment, aiming at enhancing the sound quality perceived by the user. The battery life of this hearing aid is 140 hours (approximately 6 days), which is very similar to that of hearing aids on the market. Of key importance, about 30 % of the DSP resources remain available for implementing other algorithms, such as those involved in sound source separation or acoustic feedback reduction.
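To give a flavour of what "low-complexity" means for a feature, the sketch below computes a zero-crossing-rate-style measure, which needs only one comparison and at most one addition per sample. This is a generic illustration in Python, not the exact feature set designed in the thesis, and the function name is hypothetical.

```python
def zero_crossing_rate(frame):
    """Fraction of consecutive sample pairs whose signs differ.

    Features of this kind cost roughly one comparison and one
    conditional increment per sample, which is why they fit the tight
    instruction budget of a hearing-aid DSP.
    """
    crossings = 0
    for prev, cur in zip(frame, frame[1:]):
        if (prev >= 0) != (cur >= 0):
            crossings += 1
    return crossings / (len(frame) - 1)
```

A noisy or voiced/unvoiced distinction tends to show up directly in such a measure, which is one reason low-cost time-domain features can still separate speech, music and noise reasonably well.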
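The restricted-search idea behind the genetic feature selection — evolving only subsets of a fixed, small size so that the DSP cost of the selected features stays bounded — can be sketched as follows. All names, operators and GA settings here are illustrative assumptions, not the configuration used in the thesis.

```python
import random

def ga_feature_select(fitness, n_features, n_select,
                      pop_size=20, generations=30, seed=0):
    """Toy GA over feature subsets of exactly n_select indices.

    The fixed subset size is the "restriction": crossover samples from
    the union of two parents and mutation swaps one index for an unused
    one, so every chromosome keeps the same, bounded feature count.
    `fitness` is any user-supplied callable scoring a subset.
    """
    rng = random.Random(seed)

    def random_subset():
        return tuple(sorted(rng.sample(range(n_features), n_select)))

    def crossover(a, b):
        pool = list(set(a) | set(b))
        return tuple(sorted(rng.sample(pool, n_select)))

    def mutate(c):
        outside = [i for i in range(n_features) if i not in c]
        c = list(c)
        c[rng.randrange(n_select)] = rng.choice(outside)
        return tuple(sorted(c))

    population = [random_subset() for _ in range(pop_size)]
    for _ in range(generations):
        # Keep the top half (elitism) and refill with mutated offspring.
        survivors = sorted(population, key=fitness, reverse=True)[:pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = rng.sample(survivors, 2)
            children.append(mutate(crossover(a, b)))
        population = survivors + children
    return max(population, key=fitness)
```

Because the top half always survives, the best fitness found never decreases from one generation to the next.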
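As a minimal sketch of what one candidate piecewise linear approximation of the logistic activation can look like, the snippet below uses a three-segment approximation with breakpoints at ±4, chosen here purely for illustration; the thesis's algorithm selects among several such approximations per neuron.

```python
import math

def sigmoid(x):
    """Exact logistic activation (for comparison only)."""
    return 1.0 / (1.0 + math.exp(-x))

def pwl_sigmoid(x):
    """Hypothetical 3-segment piecewise linear logistic approximation.

    Saturates at 0 below x = -4 and at 1 above x = 4, with a single
    linear segment through (0, 0.5) in between. Only a comparison, a
    multiply and an add are needed, instead of an exponential.
    """
    if x <= -4.0:
        return 0.0
    if x >= 4.0:
        return 1.0
    return 0.5 + x / 8.0
```

With these breakpoints the worst-case deviation from the exact sigmoid is on the order of 0.13; finer segmentations trade a few more comparisons for lower error, which is exactly the per-neuron trade-off such a selection algorithm must manage.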
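The tabulated compressor-expander gain can be sketched as a two-dimensional lookup — frequency band by input level — with linear interpolation between tabulated levels. The table values below are made up for illustration; a real fitting would be prescribed per patient.

```python
import bisect

# Hypothetical tabulated gains (dB) per frequency band, indexed by the
# input level in dB SPL. Illustrative numbers only.
LEVELS_DB_SPL = [40, 50, 60, 70, 80, 90]
GAIN_TABLE = {
    "low":  [30, 25, 20, 15, 10, 5],
    "mid":  [35, 30, 24, 18, 12, 6],
    "high": [40, 34, 28, 20, 12, 4],
}

def gain_db(band, level_db_spl):
    """Look up the gain for one band, interpolating linearly between
    tabulated input levels and clamping outside the table range."""
    levels, gains = LEVELS_DB_SPL, GAIN_TABLE[band]
    if level_db_spl <= levels[0]:
        return gains[0]
    if level_db_spl >= levels[-1]:
        return gains[-1]
    i = bisect.bisect_right(levels, level_db_spl) - 1
    t = (level_db_spl - levels[i]) / (levels[i + 1] - levels[i])
    return gains[i] + t * (gains[i + 1] - gains[i])
```

The appeal on a resource-constrained DSP is that the runtime cost is one table lookup and one interpolation per band, regardless of how elaborate the fitting rule that produced the table was.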