Among the different micromechanical workings of the cochlea, frequency and intensity processing continues to provide researchers with new clues about how best to present amplification to the impaired ear. Frequency and intensity processing in the cochlea is based on the mechanics of the basilar membrane and the sensory cells of the organ of Corti. Today’s advanced hearing instruments are continually being designed to process sounds in a manner similar to, or synergistic with, that of the cochlea. Thus, it is relevant for practitioners to understand and appreciate cochlear signal processing as it relates to hearing instrument function. This paper reviews frequency and intensity processing in the cochlea, with particular emphasis on how hearing instrument manufacturers try to replicate this processing using digital hearing instruments.

Frequency/Band Selectivity
From studies of animal and human cadavers, we know that the basilar membrane is highly frequency selective. High-frequency signals stimulate the basal (bottom) end of the basilar membrane, while low-frequency sounds primarily stimulate apical (top) regions. A review of cochlear anatomy shows why: the basilar membrane is narrow, stiff and light at its base, properties that make this area high-frequency selective. In contrast, the apical end of the basilar membrane is wider, more compliant and relatively massive, qualities needed for low-frequency selectivity. This tonotopic organization allows for complex filtering functions.

As in the cochlea, frequency selectivity is fundamental to digital signal processing (DSP) hearing instruments. The basis of most DSP hearing instruments is the Fast Fourier Transform (FFT), which converts sound from the time domain (the waveform) into the frequency domain (the frequency spectrum), allowing individual frequencies or bands to be manipulated. DSP instruments recently have been introduced that feature large numbers of frequency bands and channels. A few examples include Siemens’ Signia (8 channels), Sonic Innovations’ Natura 2SE (9), GN ReSound’s Digital 5000 Series (14 bands), Unitron’s Selex (16) and Phonak’s Claro (20). These instruments illustrate what some industry observers see as a “race” by manufacturers to define and manipulate signals within discrete bandwidths/channels using diverse processing strategies. This type of signal manipulation is reflective of the multi-band processing found in the cochlea.1
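For readers who want a concrete picture of this band splitting, the following minimal sketch (in Python, using only NumPy) windows one short frame of sound, applies an FFT and reports the level in each of several bands. It is not any manufacturer’s algorithm; the frame length, window and band edges are assumptions chosen purely for illustration.

```python
# Minimal sketch: analyzing one frame of sound into frequency bands with an FFT.
# Frame length, window, and band edges are illustrative assumptions only.
import numpy as np

def band_levels(frame, fs, band_edges_hz):
    """Return the level (dB, arbitrary reference) of each band in one frame."""
    spectrum = np.fft.rfft(frame * np.hanning(len(frame)))   # time -> frequency domain
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    power = np.abs(spectrum) ** 2
    levels = []
    for lo, hi in zip(band_edges_hz[:-1], band_edges_hz[1:]):
        in_band = (freqs >= lo) & (freqs < hi)
        band_power = power[in_band].sum() + 1e-12            # avoid log(0)
        levels.append(10.0 * np.log10(band_power))
    return np.array(levels)

# Example: a 1 kHz tone shows up almost entirely in the bands around 1000 Hz.
fs = 16000
t = np.arange(256) / fs
frame = np.sin(2 * np.pi * 1000.0 * t)
print(band_levels(frame, fs, [0, 500, 1000, 2000, 8000]))
```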

The processing bands within the cochlea can couple and decouple, depending on the input signal. As complex sounds enter the cochlea, they create ad hoc band-pass filters with varying center frequencies. Each new input signal therefore yields a different set of frequency bands, causing the cochlea to reorganize its band-pass filters to accommodate the incoming sound. These overlapping, dynamic filters may be over 100 Hz wide, yet their center frequencies can differ by as little as 1 Hz, revealing extreme cochlear precision in frequency selectivity.

Each cochlear band tends to be asymmetrical, with a bias toward the low frequencies. To conceptualize this, think of the basilar membrane’s traveling wave envelope. When a signal is introduced to the cochlea, the wave starts at the basal end and travels apically until it reaches its corresponding point on the basilar membrane, then quickly dies out. Because the wave passes through the basal (high-frequency) region on its way to that point, the high-frequency portion of the band is also excited, and some high-frequency components may become masked (in general, there is little spread of masking toward the low frequencies). This is referred to as the upward spread of masking.

Additionally, louder input signals may widen a band, leading to more intense masking of high-frequency segments. At low input levels, the basilar membrane shows remarkably fine tuning, but as the input level increases, it becomes more broadly tuned.2 These biomechanical factors have significant psychoacoustic ramifications, particularly for speech recognition in ambient noise.

Digital Multi-Band Noise Reduction
The unimpaired cochlea exhibits precise frequency tuning and adaptive band-pass filtering. The impaired cochlea, by comparison, is broadly tuned, making it vulnerable to difficult listening conditions such as low (or even negative) signal-to-noise ratios (SNRs). Beyond providing fidelity, the manipulation of frequency bands in advanced digital hearing instruments is designed to reduce background noise (a process often referred to as spectral subtraction). Background noise is generally steady state (low in modulation), while speech fluctuates (high in modulation). By calculating the average fluctuation of the signal envelope in each band, advanced instruments try to sort speech from noise and then reduce gain only in the frequency band(s) where noise is detected.
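As a rough sketch of this modulation logic, the function below examines the level envelope of one band over a stretch of input, backs off gain when the envelope is nearly flat (noise-like) and leaves it alone when the envelope fluctuates (speech-like). The fluctuation metric, the 6 dB threshold and the 10 dB maximum reduction are assumptions for illustration, not values used by any product.

```python
# Illustrative sketch of modulation-based noise reduction in a single band.
# The fluctuation metric, threshold, and maximum reduction are assumed values.
import numpy as np

def band_gain_change_db(envelope_db, mod_threshold_db=6.0, max_reduction_db=10.0):
    """envelope_db: frame-by-frame level of one band over a stretch of input.
    A small peak-to-trough swing suggests steady-state noise -> reduce gain."""
    fluctuation = np.percentile(envelope_db, 95) - np.percentile(envelope_db, 5)
    if fluctuation >= mod_threshold_db:
        return 0.0                                   # speech-like: leave gain alone
    depth = (mod_threshold_db - fluctuation) / mod_threshold_db
    return -max_reduction_db * depth                 # noise-like: back the gain off

# A fluctuating (speech-like) band vs. a nearly flat (noise-like) band:
print(band_gain_change_db(np.array([55, 70, 48, 66, 52, 72])))   # ~0 dB change
print(band_gain_change_db(np.array([60, 61, 60, 59, 60, 61])))   # several dB of cut
```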

This procedure becomes more difficult when the background noise is itself a collection of speech. However, when analyzed over time, a collection of overlapping talkers shows less peak-to-peak modulation than a single talker and thus resembles steady-state noise. The digital algorithms responsible for noise reduction have been given different names by the various hearing instrument manufacturers but, whatever the name, they all attempt to exploit modulation within frequency bands, with the goal of enhancing speech-like signals and/or suppressing noise-like signals (e.g., background talkers).

As mentioned, gain for steady-state noise is reduced in the frequency band where the noise is detected. Hearing instruments with larger numbers of bands are therefore better positioned to preserve speech audibility in the presence of noise. A single-band instrument must reduce gain across the entire frequency range, ensuring comfort but adversely affecting audibility. A dual-band instrument is more likely to confine the gain reduction to the frequency range where noise is detected, but the compromise is still coarse. If gain is reduced in the low band, audibility is sacrificed in the presence of noise; applied too aggressively, this may lead a client to comment, “I can hear better without the darn things.” If the instrument detects noise in the higher frequency band, gain reduction there can compromise intelligibility in noise: the soft, high-frequency consonants of speech are sacrificed, leading to the complaint, “This thing makes everything louder except the person I want to hear.”

The importance of certain frequency bands to the perception of speech suggests that gain should not be reduced equally across all bands in background noise. For example, the lowest three vowel formants fall roughly within 500-3500 Hz and carry much of the energy of speech.3 Obviously, it would be unwise to drastically reduce gain in this region. Frequency bands that contribute minimally to speech perception, however, may receive more gain reduction than those considered important to speech understanding.
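One hedged way to express that rule: cap the amount of noise reduction allowed in bands that overlap the 500-3500 Hz formant region more tightly than in less speech-critical bands. The 3 dB and 12 dB caps below are illustrative assumptions, not recommended fitting values.

```python
# Illustrative sketch: limit noise reduction more strictly in the formant region
# (roughly 500-3500 Hz per the text). The cap values are assumptions only.
def capped_reduction_db(band_lo_hz, band_hi_hz, requested_reduction_db):
    """requested_reduction_db is negative (a gain cut); return the permitted cut."""
    in_formant_region = band_hi_hz > 500 and band_lo_hz < 3500
    cap_db = 3.0 if in_formant_region else 12.0
    return max(requested_reduction_db, -cap_db)

print(capped_reduction_db(1000, 2000, -10.0))   # formant band: cut limited to -3 dB
print(capped_reduction_db(4000, 6000, -10.0))   # less critical band: full -10 dB cut
```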

Intensity Masking
Within a given speech signal, some frequency components are louder than others. The higher-amplitude components tend to mask the lower-amplitude ones, leaving them imperceptible to the unimpaired ear. Many digital hearing instruments may amplify these naturally unperceived components of a signal, leading to complaints that the sound is too loud and/or lacks speech clarity.

It is common for hearing care professionals to use prescriptive formulas when adjusting gain during the hearing instrument fitting process. During this process, the dispensing professional is often forced to make compromises, reducing gain below target in response to a client’s complaints of loudness. In certain circumstances, this complaint can stem from amplification of otherwise unperceived components of the signal. Additionally, over-amplification of the microstructures of speech may degrade speech discrimination: consonants and vowels produce high- and low-amplitude segments that can mask one another, and DSP instruments, which often over-amplify the low-amplitude segments of speech, have the potential to reduce spectral resolution in certain situations.

The physiological explanation for this may be the broader tuning of the basilar membrane as the intensity of the input signal increases. This by no means suggests that sounds made inaudible by a hearing loss should not be made audible. However, those components of a signal that are naturally inaudible to the unimpaired ear should also remain inaudible to the hearing-impaired ear. This issue continues to challenge digital hearing instrument manufacturers, and industry engineers are beginning to address it using various strategies (e.g., Phonak’s Claro is designed to eliminate what the company terms “spectrally masked artifacts” that contribute nothing to speech understanding or sound quality).

Automatic Gain Control and the Cochlea
The cochlea acts like an amplifier for soft-to-moderate input signals. More specifically, the outer hair cells (OHCs) are responsible for cochlear amplification, while signal specificity (clarity) is the responsibility of the inner hair cells (IHCs).4 The majority of the neural fibers that communicate with the OHCs are descending (efferent) projections, suggesting that the OHCs serve an interactive function, receiving information from the central auditory nervous system (CANS). The flask-shaped IHCs communicate with most of the ascending (afferent) neural fibers, suggesting that the IHCs serve a transductive function, sending information to the CANS. Because the IHCs are responsible for signal specificity, it is important that they detect soft sounds, and it is the OHCs that allow them to do so effectively. Consequently, OHC damage may lead to a moderate 40-60 dB sensorineural hearing loss (SNHL).

Cochlear anatomy reveals that the stereocilia of the OHCs are embedded in the tectorial membrane.5 The stereocilia of the IHCs, however, are neither embedded in nor in contact with the tectorial membrane.6 In order for both types of cells to transmit electrochemical information, a shearing of the apical stereocilia must occur in the direction of the stria vascularis (i.e., away from the limbus). It is the tectorial membrane that shears the stereocilia bundles of both cell types. In studying the sensory cells of the cochlea, it has been found that the OHCs are motile.7 Like muscle cells, they have the potential to shorten (pull) or elongate (push). This motility is attributed to the protein actin, which is also found in muscle fiber.5

When a soft signal is introduced to the cochlea, the OHCs shorten, causing a shearing of the IHCs’ stereocilia and allowing the sound to be perceived.8 In this way, the OHCs act as amplifiers for soft-to-moderate sounds.

Loud sounds naturally produce pressure waves in the cochlea that stimulate the stereocilia bundles of both types of sensory cells. The OHCs are specially equipped to compress a 120 dB signal range into the 60 dB dynamic range of the IHCs, in effect an overall compression on the order of 2:1.1 One theoretical view of this processing is that the OHCs elongate in the presence of loud signals so as to dampen or compress the sound. Thus, a SNHL resulting from damage to the OHCs may exhibit a loss of cochlear amplification and increased linearity (a lack of cochlear compression), making loud sounds much less tolerable (i.e., recruitment).

Digital hearing instruments attempt a form of cochlear amplification and compression through multi-band wide dynamic range compression (WDRC). Multi-band WDRC applies the most gain to soft sounds while keeping amplified sounds within the listener’s residual dynamic range. Thus, in a sense, it is an attempt to bolster or replace the function of the outer hair cells. The disadvantage of WDRC is the over-amplification of naturally unperceived frequency components of a signal, which can lead to the complaints of loudness or reduced speech clarity discussed earlier.
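The input/output behavior of one WDRC band can be sketched with a few lines of arithmetic: full gain below the kneepoint, and progressively less gain above it so that a wide range of inputs is squeezed into a narrower range of outputs. The kneepoint, compression ratio and gain for soft sounds below are illustrative assumptions, not prescriptive targets.

```python
# Minimal sketch of a single-band WDRC gain rule. Kneepoint, compression ratio,
# and gain for soft sounds are assumed values chosen only for illustration.
def wdrc_gain_db(input_db, kneepoint_db=45.0, ratio=2.0, soft_gain_db=25.0):
    """Return gain (dB) for a given band input level (dB SPL)."""
    if input_db <= kneepoint_db:
        return soft_gain_db                               # soft sounds get the most gain
    excess = input_db - kneepoint_db                      # above the kneepoint, output
    return soft_gain_db - excess * (1.0 - 1.0 / ratio)    # rises only 1/ratio dB per dB

for level in (40, 60, 80):
    print(f"{level} dB in -> {level + wdrc_gain_db(level):.1f} dB out")
```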

Another disadvantage of WDRC and DSP processing is the patient complaint of hearing noise when the surroundings are quiet. Circuit noise from the instrument can be amplified and becomes increasingly audible in quiet situations. To address this, the complementary technique of expansion is now being employed. Expansion provides progressively less gain to signals below the expansion threshold, with gain rising back toward its prescribed value as the input level approaches the compression kneepoint.
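A sketch of how expansion can sit beneath such a WDRC rule: for inputs below an assumed expansion threshold, gain is trimmed toward zero so that microphone and circuit noise is not amplified in quiet, while soft speech just above the threshold still receives full gain. The threshold, slope and gain values are assumptions for illustration.

```python
# Illustrative sketch of expansion below the WDRC kneepoint. The expansion
# threshold, slope, and gain values are assumptions, not fitting recommendations.
def gain_with_expansion_db(input_db, expansion_threshold_db=30.0,
                           soft_gain_db=25.0, expansion_slope=2.0):
    """Gain below the WDRC kneepoint; the compression rule takes over above it."""
    if input_db >= expansion_threshold_db:
        return soft_gain_db                       # soft speech and above: full gain
    deficit = expansion_threshold_db - input_db   # how far below the threshold
    return max(0.0, soft_gain_db - deficit * expansion_slope)

for level in (20, 28, 35):
    print(f"{level} dB in -> {gain_with_expansion_db(level):.1f} dB of gain")
```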

The OHCs not only amplify soft-to-moderate sounds; they also sharpen the peak of the traveling wave, adding to frequency resolution. Early studies of the traveling wave, conducted on human and animal cadavers, showed a rounded peak because the active contribution of the OHCs is lost after death. Currently, some hearing instrument manufacturers are taking first steps toward sharpening the microstructures of speech important to perception; for example, some DSP hearing instruments provide automatic consonant and high-frequency signal shaping. However, caution is needed to prevent over-amplification, which may adversely affect speech discrimination ability.

Summary
The physiology of the basilar membrane suggests that it is a complex band-pass filter with remarkable frequency selectivity. A SNHL may cause the basilar membrane to become less sensitive, more broadly tuned and more linear in function than a normal basilar membrane. Therefore, it is clinically relevant to understand these functions and attempt to restore them. Today’s advanced hearing instruments attempt to provide increased sensitivity through prescribed amplification, fine tuning through frequency shaping and nonlinearity through compression. The complaint of many hearing-impaired listeners (including hearing instrument users) that they can hear speech but not fully understand it may stem from damage to the IHCs.9 IHC damage may lead to poor intelligibility despite good audibility; OHC damage may lead to both poor audibility and poor intelligibility, along with a greater susceptibility to loudness discomfort at high sound levels (recruitment).

Understanding cochlear signal processing in the unimpaired ear may allow the hearing industry to develop DSP strategies that compensate for lost physiological functions, bringing audiological science and amplification to a new level of proficiency. Although the hearing care field has made laudable—if not incredible—progress with advanced hearing instruments over the past decade, it remains far from the replication of acoustic signal processing with cochlear precision.

O’neil Guthrie, MS, is a rehabilitative audiologist with HEARx Ltd., Edison, NJ.

References
1. Edwards BW, Struck CJ, Dharan P & Hou Z: New digital processor for hearing loss compensation based on the auditory system. Hear Jour 1998; 51 (8): 38-49.

2. Kuk FK & Ludvigsen C: Hearing aid design and fitting solutions for persons with severe-to-profound losses. Hear Jour 2000; 53 (8): 29-37.

3. Kent RD: The Speech Sciences. San Diego: Singular Publishing Group Ltd, 1997.

4. Killion MC & Niquette PA: What can the puretone audiogram tell us about a patient’s SNR loss? Hear Jour 2000; 53 (3): 46-53.

5. Bess FH & Humes LE: Audiology: The Fundamentals (2nd Ed.). Baltimore: Williams & Wilkins, 1995.

6. Martin FN: Introduction to Audiology (4th Ed). Englewood Cliffs, NJ: Prentice Hall, 1991.

7. Gelfand SA: Essentials of Audiology. New York: Thieme Medical Publishers, Inc, 1997.

8. Venema T: Educating consumers, MDs on hearing: Some tips for effective presentations. Hear Jour 2000; 53 (7): 42-48.

9. Killion MC: SNR loss: “I can hear what people say, but I can’t understand them.” Hearing Review 1997; 4 (12): 8-14.

Correspondence can be addressed to HR or O’neil Guthrie, HEARx Ltd., 1455 Route One South, Edison, NJ 08837; email: [email protected].