Reverberation is a well-studied acoustical property that often complicates aided listening. It is possible that the processing properties of many nonlinear hearing aids contribute to the difficulties encountered by hearing aid wearers. This article examines that premise and describes the development of an alternative processing approach designed to mitigate many of the difficulties of reverberation.

The Basics of Reverberation
It is fairly well understood that reverberation, or the acoustical decay of a sound, can have a negative impact on speech intelligibility. Conventionally, reverberation time is expressed as the time required for a signal’s magnitude to decrease by 60 dB. This is designated the RT60 and is usually expressed in seconds or fractions of a second.
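Under the simplifying assumption of a straight-line (exponential) decay, an RT60 figure converts directly to a decay rate in dB per second. A minimal sketch (the function names are illustrative, not from any standard library):

```python
def decay_rate_db_per_s(rt60_s):
    """Linear-decay assumption: a 60 dB drop spread evenly over RT60."""
    return 60.0 / rt60_s

def level_drop_db(rt60_s, elapsed_s):
    """dB by which a reverberant tail has decayed after elapsed_s seconds."""
    return decay_rate_db_per_s(rt60_s) * elapsed_s
```

For example, an RT60 of 0.5 seconds corresponds to 120 dB per second, so a reverberant tail decays only about 12 dB in 100 milliseconds.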

Some of the effects of reverberation on speech identified by Assmann & Summerfield1 include: 1) smearing and prolonging of spectral change cues, such as formant transitions; 2) smoothing of the waveform envelope; and 3) an increase in low-frequency energy, with greater resultant masking of the higher frequencies. Final consonants are particularly susceptible to interference.2,3 Stop consonants are also vulnerable to the filling in of the silent gaps during stop closure.

Reverberation also obscures information pertinent to rates of spectral change.4 This causes confusion between stop consonants and semi-vowels. Sussman et al.5 also noted how reverberation interferes with second formant transition perception as these parts of speech tend to persist unnaturally. Place of articulation errors are often the observed consequence of this type of interference from reverberation.

Generally, the effects of reverberation on speech intelligibility are quite complex and not well described by spectral-based analysis methods such as the Articulation Index. This is partly due to the fact that, for normal-hearing listeners at least, binaural “de-verberation” processes come into play. On the other hand, the effect of reverberation and noise combined is considerably worse than noise alone. This is fairly well-modeled by the Articulation Index.3,6 Extensive evidence indicates that interference of echoes on target speech sounds is considerably worse for the typical hearing-impaired listener.7-9 Hence, it would be of particular importance to ensure that a hearing aid’s signal processing operations do not impose further complications to an already acoustically troublesome situation.

Figure 1. Illustration of the effect of three different reverberation times on the softer final consonant in the word "back." Only with the shortest time (250 milliseconds) does the vowel portion of the utterance decay sufficiently so as not to mask the "k" sound.

A simple illustration of one of the principles of reverberation interference on a speech sequence is shown in Figure 1. It is based on an example developed by Everest.10 A single spoken word, “back,” is shown with the consequences of three different reverberation times. The rising sound pressure of the [bæ] component peaks about 320 milliseconds before the [k] sound, which trails by about 100 milliseconds. The peak amplitude of the final [k] is 25 dB less than that of the [bæ] in this representation. If the reverberation time of the room is about 250 milliseconds, the initial and more intense [bæ] will have decayed sufficiently below the peak energy of the [k] so as not to cause masking. However, if the room reverberation time is longer, as in the lines representing decays of 500 milliseconds and 1 second, masking of the trailing final consonant is almost certain to occur. This is a fairly simple case of aggravated forward masking.
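The forward-masking arithmetic in Figure 1 can be sketched numerically. The 25 dB level difference and the roughly 100 millisecond trailing interval come from the example above; treating the decay as linear at 60/RT60 dB per second, starting from the vowel peak, is a simplifying assumption:

```python
def residual_above_consonant_db(rt60_s, gap_s=0.100, level_diff_db=25.0):
    """How far (in dB) the vowel's reverberant tail remains ABOVE the final
    consonant's peak when the consonant arrives. Positive values suggest
    forward masking; values near or below zero suggest the [k] escapes.
    Assumes linear decay of 60/RT60 dB per second from the vowel peak."""
    decay_db = (60.0 / rt60_s) * gap_s
    return level_diff_db - decay_db

for rt60 in (0.25, 0.50, 1.00):
    print(rt60, residual_above_consonant_db(rt60))
```

With RT60 values of 1 second and 500 milliseconds, the vowel's tail is still roughly 19 and 13 dB above the [k] peak when it arrives; at 250 milliseconds the tail has decayed to approximately the consonant's own level, consistent with the figure.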

For such long reverberation times, as are often found in auditoriums and large public spaces, the “bounce” of reflected sound is well known to compromise speech clarity for most listeners. But what about the “softer” and smaller-volume acoustical conditions that more typically confront hearing aid users at home or in the office? Is reverberation a problem for hearing aids when the RT60 is as low as the 250 millisecond line drawn in Figure 1? The answer may well depend on the particular processing properties of the hearing aid amplifiers and on the prospect of complications from standing waves, which are not necessarily revealed by conventional RT60 values.

Examining Reverberation Problems in AGC Processing
Consider a hearing aid with automatic gain control (AGC) with a relatively low threshold of compression and a fast release time. By design, greater gain is applied to softer sounds than to sounds with higher sound pressure levels. Lower level reflections, then, could in principle receive proportionately more gain than the leading (original) sounds. A long release time may mitigate this to some extent, but some might argue that the potential improvements of vowel-to-consonant ratio might be traded away in such a case. Multiple time constants have sometimes been implemented to reduce some of these effects, but not all hearing aids have such designs.
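The static behavior can be illustrated with a simple input/gain calculation. The parameters below (a 40 dB compression threshold, 2.5:1 ratio, and 30 dB of linear gain) are illustrative assumptions, not the settings of any specific product:

```python
def wdrc_gain_db(input_db, ct_db=40.0, cr=2.5, linear_gain_db=30.0):
    """Static WDRC curve: linear below the compression threshold; above it,
    output grows by 1/CR dB per input dB, so gain shrinks as input rises."""
    if input_db <= ct_db:
        return linear_gain_db
    return linear_gain_db - (input_db - ct_db) * (1.0 - 1.0 / cr)
```

With these numbers, a 45 dB SPL reflection receives 27 dB of gain while a 65 dB SPL direct sound receives only 15 dB, shrinking the original 20 dB direct-to-reflection contrast to 8 dB at the output.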

For purposes of this investigation, a digital compression hearing aid of a two-channel design was evaluated in contrast to a new approach that specifically attempts to reduce the effects of reverberation interference in a typical living room. A development platform in use at Acoustimed (Pty) Ltd was configured to make comparisons between a hearing aid configured in a standard, commercially available, compression approach and a proprietary configuration.

The standard compression set-up was configured using the FIG611 prescription for a flat hearing loss of 60 dB. For analysis purposes, the frequency response was adjusted to be flat (within 3 dB) for a 65 dB input from 100 Hz to 5 kHz. Expansion was set at 30 dB to reduce low-level ambient noise. Other details are as follows: the crossover frequency was located at 1.6 kHz; the low-frequency channel compression threshold (CT) was set to 40 dB, and the compression ratio (CR) was 2.5:1. In the high-frequency channel, the CT was also set at 40 dB, and the CR was 3.6:1. A fast attack and release time for both channels was enabled (32 milliseconds) for fast (transient) input signals; a slow attack and release time in both channels was enabled (256 milliseconds) for less rapid input changes. The slow release time was implemented to avoid “pumping” and to preserve the relative intensity of the vowels. These time constants are within the operating range of many commercial hearing aids.

Samples of connected discourse were presented to the hearing aid attached to a 2cc coupler. The speaker was located at a distance of 75 cm from the hearing aid microphone, and the average speech level adjusted to 65 dB SPL. The room was a standard office with a computer fan operating. The measured ambient noise at the location of the hearing aid was 45 dBA.

An alternative hearing aid design was programmed into the same hearing aid with the specific objective of reducing low-level reverberation. The basic operating properties were set to match the standard hearing aid as much as possible. However, a key difference was that the expansion properties were set to a threshold of 50 dB input level rather than the 30 dB of the more standard approach. Furthermore, the amplifier remained linear through a range of inputs that would include most speech sounds, up to 90 dB. The expansion time constants were 16 milliseconds for attack and release.
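The contrasting input/gain curve can be sketched the same way. The 50 dB expansion knee and the linear range up to 90 dB come from the description above; the 2:1 expansion ratio and 30 dB of linear gain are illustrative assumptions:

```python
def reverb_control_gain_db(input_db, exp_knee_db=50.0, exp_ratio=2.0,
                           linear_gain_db=30.0, max_input_db=90.0):
    """Sketch of the reverberation-control curve: expansion below the knee
    (gain falls as input falls), linear gain from the knee up to 90 dB,
    and output limiting above that."""
    if input_db < exp_knee_db:
        # each dB below the knee costs (exp_ratio - 1) dB of gain
        return linear_gain_db - (exp_knee_db - input_db) * (exp_ratio - 1.0)
    if input_db <= max_input_db:
        return linear_gain_db
    # hold output constant above the 90 dB input point
    return linear_gain_db - (input_db - max_input_db)
```

Low-level reverberant tails (say, 40 dB SPL) now receive less gain than the speech itself rather than more, while everything in the 50 to 90 dB speech range is amplified linearly.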

Figure 2. Input/Output curve (in SPL) for the standard compression approach (red), the reverberation control hearing aid (blue), and a linear hearing aid (green with dots).

Figure 2 shows an Input/Output curve (400 Hz) for the compression hearing aid (red) and for the “reverberation control” hearing aid (blue). The dotted green line provides a linear reference up to 90 dB with no further output allowed. The operational differences are perhaps more clearly illustrated by the Input/Gain curve shown for the same frequency and input range in Figure 3.

Figure 3. Input/Gain curve. Standard compression approach (red), the reverberation control hearing aid (blue), and a linear hearing aid (dotted green).

A hearing aid that operates linearly through the input range of 50 dB-90 dB is noteworthy for its non-conformity to the standard assumptions and approaches that use Wide Dynamic Range Compression (WDRC). WDRC attempts to both maximize audibility and, to some extent, “normalize” loudness. However, the possibility for introducing greater audibility of ambient noise has encouraged the introduction of a variety of “noise reduction” algorithms.12 In general, these algorithms are programmed to attenuate steady-state types of noise, but may introduce artifacts themselves as the attenuation is suddenly relaxed in response to the detection of speech.

Initial Trial Fittings
An initial group of 12 experienced hearing aid users who had previously been fit with the WDRC approach had their digital hearing aids re-programmed to the Reverberation Control settings. When offered fine-tuning of the aids, the users’ comments took the clinicians by surprise: in 11 cases, they strongly insisted on “not changing anything.” The 12th requested only a small high-frequency gain change. Follow-up interviews supported the continued perceived improvement in the more complex acoustics of everyday use.

Figure 4. Four spectrograms of a 20-second passage of a male talker (Prince Philip). Sample A is the original studio recording. Sample B is the same passage replayed and recorded in a “quiet” office environment at a distance of 75 cm. Low level noise and reverberation clearly fill in the inter-syllabic spaces. Sample C is the output from the standard compression hearing aid with speaker at the same distance and level as in Sample B (average speech level was 65 dB). In this sample, the hearing aid shows some improvement of the high frequency content, but the spaces between syllables still contain considerable noise and reverberation content. Finally, Sample D shows the output for the reverberation control hearing aid for the same conditions as in Samples B and C. A considerable recovery of the inter-syllabic spaces is evident.

Obviously, systematic performance measures are required, and such studies are underway. However, informal clinical observations and interviews with early users suggest that the approach may provide a kind of phonemic edge enhancement13,14 that allows for greater clarity of the co-articulated speech components. This speculation relates to the considerable reduction of low-level noise, especially reverberation, in typical office and living room environments. This can be seen quite clearly in the spectrographic samples of connected discourse shown in Figure 4. The reader is also invited to listen to the sound samples from which these spectrograms were made by visiting the Acoustimed Website at www.acoustimed.co.za/EchoStop.htm.

The 20-second sample shown in Figure 4 is a passage of a studio recording of Prince Philip, Duke of Edinburgh, in a commentary on economics. Sample A is the original; Sample B is a recording in the semi-reverberant room at a distance of 75 cm. The average speech level was adjusted to 65 dB. Noise and reverberation clearly fill in the inter-syllabic gaps. In Sample C, the standard compression aid was presented with speech at the same distance and level as in Sample B. The noise between the syllables is quite prominent. Finally, Sample D shows how closely the Reverberation Control hearing aid resembles the original recording, providing greater separation between syllables and clearly reducing the noise between the phonemic elements.

Figure 5. The relative difference in the ambient noise for the standard compression hearing aid and the Reverberation Control aid. The 0 line represents the noise in the room without either aid. Differences of nearly 20 dB are evident between the two hearing aids.

As a means of quantifying the extent to which the noise was reduced, a spectral analysis of the two hearing aids was conducted and referenced to the 45 dB room noise. This is shown in Figure 5, where the 0 dB reference line is the noise measured at the coupler microphone with no hearing aid attached and the coupler removed.

Figure 6. Difference between a 100 millisecond white noise burst measured at the speaker (left) and the same signal measured 75 cm from the speaker (right).

More on Room Acoustics
Another often overlooked aspect of the acoustics of a “typical” office or living room is the possibility of standing wave complications. Generally, standing waves are associated with tones and with the local drop-outs and increases that can occur as reflections interfere with source signals. But when one considers that the low-frequency energy of the fundamental frequency of a voice has tonal properties, the possibility of standing waves shouldn’t be dismissed. A simple comparison of a 100 millisecond white noise burst both at the speaker and at the 75 cm location used for these hearing aid measures is provided in Figure 6. On this basis, the room’s RT60 can be calculated at 340 milliseconds.
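The article does not state how the 340 millisecond figure was derived; a standard method is Schroeder backward integration of a recorded decay, fitting the slope of the early part of the energy decay curve and extrapolating to a 60 dB drop. A plain-Python sketch, offered only as an illustration of the technique:

```python
import math

def rt60_schroeder(ir, fs, fit_db=(-5.0, -25.0)):
    """Estimate RT60 from a recorded decay (impulse response `ir`, sample
    rate `fs`) via Schroeder backward integration: build the energy decay
    curve, fit a line through its -5 to -25 dB portion, and extrapolate
    the slope to a 60 dB drop."""
    # backward-integrated energy decay curve (EDC)
    edc = [0.0] * len(ir)
    total = 0.0
    for i in range(len(ir) - 1, -1, -1):
        total += ir[i] * ir[i]
        edc[i] = total
    edc_db = [10.0 * math.log10(e / edc[0]) for e in edc]
    # least-squares line through the -5..-25 dB portion of the curve
    pts = [(i / fs, d) for i, d in enumerate(edc_db) if fit_db[1] <= d <= fit_db[0]]
    n = len(pts)
    sx = sum(x for x, _ in pts)
    sy = sum(y for _, y in pts)
    sxx = sum(x * x for x, _ in pts)
    sxy = sum(x * y for x, y in pts)
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # dB per second (negative)
    return -60.0 / slope
```

Fed a purely exponential synthetic decay, this recovers the nominal RT60; fed a real room recording such as the tone burst in Figure 7, it reports the much longer "effective" decay that standing waves can produce.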

Figure 7. Difference between a 200 Hz tone burst at the speaker (left) and the same signal recorded 75 cm from the speaker (right). In this case, standing waves increased the reverberation time by a factor of more than 2.5.

This is in marked contrast to Figure 7, where a 200 Hz puretone burst was used. In this case, standing waves increased the reverberation time by a factor of more than 2.5. Here, the greatest amplitude of the sound at the microphone occurs after the signal has ceased! The best estimate of the time it takes for this tone burst to decay by 60 dB is over 900 milliseconds. A similar analysis of the word “back” clearly showed a tendency for a much longer “effective reverberation time” than the nominal, and more standardized, RT60 assigned to the room. This is presumably attributable to the “tonal” property of the voiced portions of the word and the presence of standing wave complications.

Summary and Discussion
There are multiple diverse factors related to the significance of this work. Sensorineural hearing loss is often complicated by a widening of tuning curves that contribute to upward spread of masking problems, and temporal distortions15 that may aggravate the forward masking aspects of reverberation. On the other hand, binaural processes are commonly understood to work to reduce the effects of reverberation for normal-hearing listeners (at least). But, of course, hearing aids are decidedly “uncorrelated” amplifier devices and it is, at best, unclear how they support or interfere with natural binaural processes.16

Reverberation is also apparently more daunting than simple analysis seems to suggest. Even within rooms of relatively common size, acoustics are likely to be a complicating factor that reduces speech clarity in an otherwise appropriately prescribed amplification scheme. This might relate to why many hearing aid wearers report better TV listening in their living rooms with directly coupled audio listening devices than with their own advanced digital hearing aids.

The approach detailed in this report involves altering the input/output properties of a digital 2-channel hearing aid platform to provide substantially more expansion than is commonly used, and a linear operating mode through the speech frequencies. Acoustical analysis of the standard compression approach and reverberation control method indicates a 50% reduction in reverberation and a nearly 20 dB reduction in ambient noise.

Initial fittings with this design have been highly favorable in terms of consumer acceptance. The approach also brings a serendipitous and significant reduction in the tendency toward acoustic feedback, since gain is reduced during low-input intervals. Hence, larger vents can be used in the fittings.

This article was submitted to HR by H. Christopher Schweitzer, PhD, an audiologist and president of HEAR 4U International, Lafayette, Colo, and Desmond A. Smith, a research scientist for Acoustimed (Pty) Ltd, Johannesburg, South Africa. Correspondence can be addressed to H. Christopher Schweitzer, PhD, HEAR 4U International, 2505 Ginny Way, LaFayette, CO 80026; email: [email protected],  or Desmond A. Smith at [email protected].

References
1. Assmann P, Summerfield Q. The perception of speech under adverse acoustic conditions. In: Greenberg S, Ainsworth W, Popper AN, Fay RR (eds). Speech Processing in the Auditory System. Springer Handbook of Auditory Research, Vol. 18 (in press).
2. Gelfand S, Silman S. Effects of small room reverberation on the recognition of some consonant features. J Acoust Soc Am. 1979;66:296-306.
3. Nabelek A, Letowski T, Tucker F. Reverberant overlap and self masking in consonant identification. J Acoust Soc Am. 1989;86:1259-1265.
4. Nabelek A. Identification of vowels in quiet, noise, and reverberation: relationships with age and hearing loss. J Acoust Soc Am. 1988;84:476-484.
5. Sussman H, McCaffrey H, Matthews S. An investigation of locus equations as a source of relational invariance for stop-place categorization. J Acoust Soc Am. 1991;90:1309-1325.
6. Helfer K. Binaural cues and consonant perception in reverberation and noise. J Speech Hear Res. 1994;35:1394-1401.
7. Finitzo-Hieber T, Tillman T. Room acoustics effects on monosyllabic word discrimination ability for impaired and normal hearing. J Speech Hear Res. 1978;21:440-458.
8. Duquesnoy A, Plomp R. The effect of a hearing aid on the speech-reception threshold of a hearing-impaired listener in quiet and noise. J Acoust Soc Am. 1983;18:435-441.
9. Humes L, Dirks D, Bell T, Ahlstrom C, Kincaid G. Application of the Articulation Index and the Speech Transmission Index to the recognition of speech by normal-hearing and hearing-impaired listeners. J Speech Hear Res. 1986;29:447-462.
10. Everest FA. The Master Handbook of Acoustics. 2nd ed. Blue Ridge Summit, Pa: TAB Books; 1989.
11. Killion M, Fikret-Pasa S. The 3 types of sensori-neural hearing loss: loudness and intelligibility considerations. Hear Jour. 1993;46(11):31-36.
12. Dillon H. Hearing Aids. New York: Thieme Medical Publishers; 2001:184.
13. Mortz M, Schweitzer C. Signal processing for phonetic “edge enhancement” for otopathologic listeners. Paper presented at: NIH/VA Forum on Hearing Aid R&D; 1995; Bethesda, MD.
14. Mortz M, Schweitzer C, Terry M. Temporal amplitude processing for phonetic “edge enhancement” on otopathologic listeners. Paper presented at: Acoustical Society of America meeting; 1995; St. Louis.
15. Moore B. Perceptual consequences of cochlear hearing loss and their implications for the design of hearing aids. Ear Hear. 1996;17:133-161.
16. Schweitzer C. Prospects for beamforming in hearing instruments. Hearing Review. 2000; 7(5):8-16.