The traditional crest factor measurement of 12 dB for speech, based on a 125 msec analysis window, is erroneous if we are talking about the real crest factor as an input to a hearing aid.

Marshall Chasin, AuD, MSc, is an audiologist and director of research at the Musicians’ Clinics of Canada, Toronto. He has authored several books, including Musicians and the Prevention of Hearing Loss (Singular Publishing), and serves on the editorial advisory board of HR. He has also guest-edited two recent special editions of HR on music and hearing loss (March 2006 and February 2009, the latter with Larry Revit, MA).

A -6 dB/octave microphone, as the name suggests, is less sensitive to sound below 1000 Hz: by 6 dB at 500 Hz and by 12 dB at 250 Hz. These have been commercially available for decades. However, it may be time to bring them back in force. For years, the -6 dB/octave microphone has been used as a “low tech” alteration to hearing aids to improve the fidelity and quality of music.
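To make the roll-off concrete, here is a minimal sketch (in Python, purely for illustration) of the attenuation such a microphone applies below an assumed 1000 Hz corner frequency; the function name and the idealized, perfectly straight 6 dB/octave slope are assumptions, not a description of any particular product.

```python
import math

def rolloff_attenuation_db(freq_hz, corner_hz=1000.0, slope_db_per_octave=6.0):
    """Idealized attenuation of a -6 dB/octave microphone below its corner.

    Returns 0 dB at or above the corner frequency, and slope_db_per_octave
    of attenuation for every octave (halving of frequency) below it.
    """
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return slope_db_per_octave * octaves_below

# Matches the figures in the text: 6 dB down at 500 Hz, 12 dB down at 250 Hz
for f in (250, 500, 1000):
    print(f"{f} Hz: {rolloff_attenuation_db(f):.0f} dB of attenuation")
```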

The weak point of modern digital hearing aids is their analog-to-digital (A/D) converters—especially for the more intense components of music.1,2 Due to design choices and various engineering trade-offs, the A/D converter at the front end of the hearing aid has a limited input dynamic range and can be over-driven by signals exceeding the mid-90 dB SPL range.
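As a rough illustration of why the ceiling sits where it does, the theoretical dynamic range of an ideal linear converter is about 6 dB per bit; the 16-bit word length and the mid-90s dB SPL clipping point in the sketch below are illustrative assumptions, since actual front-end designs vary.

```python
def adc_dynamic_range_db(bits):
    """Theoretical dynamic range of an ideal linear PCM converter:
    about 6.02 dB per bit plus 1.76 dB (full-scale sine vs. quantization noise)."""
    return 6.02 * bits + 1.76

# A 16-bit front end spans roughly 98 dB between its quantization noise floor
# and clipping. If the microphone and preamp scaling place that ceiling in the
# mid-90 dB SPL range, any acoustic input above it is clipped before the
# digital signal processing ever sees it.
print(f"16-bit converter: {adc_dynamic_range_db(16):.0f} dB dynamic range")
```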

This has nothing to do with the hearing aid software—or any other component that occurs later in the hearing aid. An A/D converter that creates distortion at the front of the hearing aid results in poor fidelity. No software changes that occur later in the system will be able to remove this front-end distortion.

Here is the short, slightly simplified story: The most intense components of speech (ie, the lower frequency vowels) do not typically exceed 85 dB SPL, so modern hearing aids can easily handle speech. The same cannot be said of music, however, where even quiet music can be in excess of 95 dB SPL. It is these more intense components of music that overdrive the A/D converter.

Table 1. For the same speech sample, the difference between the RMS of the signal and its peak (the crest factor) is shown for several analysis windows. Shorter analysis windows capture higher instantaneous peaks than longer windows, with a resulting higher crest factor.

There are several “fixes” to counter this problem (eg, see Hockley, Bahlmann, and Chasin3), but one is simply to exchange the normal broadband microphone for a microphone that is less sensitive (by -6 dB/octave) to the more intense lower frequency inputs of music. Low frequency sound still reaches the ear, typically via a direct route through the vent or unoccluded ear canal. If low frequency amplification is still required, software adjustments can re-establish these missing low frequency sounds without distortion, because the additional low frequency gain is generated after the A/D converter.

Figure 1. One-third octave internal noise measure for a given fitting: red = broadband microphone; violet = -6 dB high frequency (HF) emphasis microphone; and black = high frequency emphasis microphone system plus adjusted low-level expansion thresholds or noise compensation.

This low-tech “fix” is useful even for quieter music with levels around 80 dB SPL, because the crest factor of music adds an additional 20 dB or so to the microphone input (80 dB + 20 dB = 100 dB, well above the mid-90 dB SPL range). The crest factor is the difference between the average or RMS (root mean square) of the signal and the peak of the signal. The waveform of music is “peakier” than that of speech because of the lower level of damping inherent in many musical instruments; the crest factor for music is typically 18-20 dB, whereas that for speech is assumed to be only about 12 dB.
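In code, the crest factor of a digitized signal is simply the ratio of its absolute peak to its RMS value, expressed in dB. The sketch below (assuming NumPy) illustrates the definition with a sine wave, which has a crest factor of about 3 dB.

```python
import numpy as np

def crest_factor_db(signal):
    """Crest factor: peak level minus RMS level of the waveform, in dB."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20.0 * np.log10(peak / rms)

# A pure sine wave has a crest factor of about 3 dB; speech and especially
# music, being "peakier", score considerably higher.
fs = 16000
t = np.arange(fs) / fs
print(f"{crest_factor_db(np.sin(2 * np.pi * 440 * t)):.1f} dB")  # ~3.0
```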

However, maybe we are wrong; maybe the crest factor for speech is also much greater than 12 dB. It is quite possible that the hard-of-hearing person’s own voice at the level of their hearing aid, with its higher SPL and perhaps higher crest factor, may also overdrive modern A/D converters. The traditional crest factor that is part of all of the ANSI hearing aid reporting standards4 is based on the work of Sivian and White5 and verified by Cox et al.6 A characteristic of both of these studies is that the crest factor was measured with an analysis window of 125 msec. This makes sense in that the time constants (temporal limitations) of our auditory systems are on the order of 125 msec, so shorter temporal analysis windows would not seem necessary. However, this is not a “temporal integration issue” for our auditory systems; it is an “input issue” for the hearing aid and has nothing to do with the characteristics of our auditory system.

Crest Factors: Is Less More?

Let’s re-examine some crest factors of speech using a shorter temporal analysis window than 125 msec. Recall that the crest factor is the difference between the peak of the signal and its average or RMS value. Shorter temporal analyses than 125 msec will result in higher instantaneous peaks, with a resulting higher crest factor. A speech sample that is analyzed with a 125 msec window may indeed have a 12 dB crest factor, but the same sample, if analyzed with a 50 msec window, may result in a 16-20 dB crest factor.

Table 1 demonstrates that the crest factor for the same speech sample is actually a function of the analysis window. Using the traditional 125 msec analysis window, the crest factor is indeed on the order of 12 dB, but it is almost 17 dB if a shorter window, which better captures the instantaneous peaks, is used.
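Since the actual Table 1 values are the author's measurements, the sketch below only demonstrates the mechanism with a synthetic "bursty" signal: when the peak is taken as the highest short-term level within an analysis window, shorter windows average over less time and therefore report a higher crest factor. The window lengths and the synthetic signal are assumptions for illustration.

```python
import numpy as np

def windowed_crest_factor_db(signal, fs, window_ms):
    """Crest factor measured as the highest short-term RMS level (within an
    analysis window of window_ms) relative to the long-term RMS of the signal."""
    long_term_rms = np.sqrt(np.mean(np.square(signal)))
    n = max(1, int(fs * window_ms / 1000.0))
    short_term_rms = [
        np.sqrt(np.mean(np.square(signal[i:i + n])))
        for i in range(0, len(signal) - n + 1, n)
    ]
    return 20.0 * np.log10(max(short_term_rms) / long_term_rms)

# A mostly quiet signal with sparse loud bursts, standing in for real speech:
fs = 16000
rng = np.random.default_rng(0)
sig = 0.05 * rng.standard_normal(2 * fs)
sig[::4000] = 1.0  # brief, widely spaced peaks
for w in (125, 50, 10):
    print(f"{w} msec window: {windowed_crest_factor_db(sig, fs, w):.1f} dB")
# The measured crest factor rises as the analysis window shrinks.
```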

So, are we doing a disservice to hearing aid users if we don’t use a microphone that is less sensitive to the intense lower frequency components of music and speech? The -6 dB/octave microphone is less sensitive to the lower frequencies of speech and music, and many of the lower frequency components are also the intense ones. The net result is that a lower sound level reaches the A/D converter, so it is not as easily over-driven. There is less distortion of music than there would be with a broadband microphone, which is as sensitive to low frequency sound as it is to mid and higher frequency sounds. There is also less distortion of one’s own voice at the level of the hearing aid.

The above is a simplified explanation of why “less may be more” when it comes to more intense inputs to a hearing aid, such as music. There are other technical considerations, but this is the essence of the question: “How can we reduce the sound level so that it won’t cause the A/D converter to distort?”

A -6 dB/octave microphone has been in use specifically for music, but maybe it should also be used for all hearing aids where the hard-of-hearing consumer only requires 30 dB or less gain for the lower frequency region. If a person requires significant low frequency gain, then some or possibly all of this gain can be replaced by “digital gain” after the A/D converter via software adjustments. To fully compensate, one would need to add 6 dB to the required gain at 500 Hz and 12 dB to the required gain at 250 Hz.
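A sketch of that bookkeeping, under the same idealized 6 dB/octave assumption used earlier: the prescribed gain at a low frequency simply has the microphone's roll-off added back as digital gain after the A/D converter. The prescription values below are hypothetical.

```python
import math

def compensated_digital_gain_db(prescribed_gain_db, freq_hz,
                                corner_hz=1000.0, slope_db_per_octave=6.0):
    """Digital gain needed after the A/D converter when a -6 dB/octave
    microphone is used: the prescription plus whatever the microphone removed."""
    rolloff = (slope_db_per_octave * math.log2(corner_hz / freq_hz)
               if freq_hz < corner_hz else 0.0)
    return prescribed_gain_db + rolloff

# A prescription of 20 dB at 250 Hz becomes 20 + 12 = 32 dB of digital gain;
# the same 20 dB at 500 Hz becomes 20 + 6 = 26 dB.
print(compensated_digital_gain_db(20, 250))  # 32.0
print(compensated_digital_gain_db(20, 500))  # 26.0
```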

A person’s own voice is about 65 dB SPL at 1 meter (3 feet), but about 85 dB SPL at the level of the person’s own ear. The hearing aid microphone doesn’t know whether the sound is an intense one from a large distance or an “average” one from about 6 inches away. All vowels (and all other sonorants) in a person’s speech have their first formant at or below 500 Hz, and the first formant is the most intense element in the sound. High vowels ([i] and [u]) have a very low frequency first formant (below 250 Hz), and low vowels (eg, [a]) have their first formant in the 400-500 Hz region.

Even speech can have instantaneous peaks of about 17 dB above its RMS or average value, if the analysis window is short enough (see Table 1). Adding an instantaneous peak of 17 dB to an RMS value of 80-85 dB SPL results in levels in excess of 100 dB SPL—more than enough to overdrive the A/D converter. So, it’s not only music that is intense; a person’s own voice at the level of their own hearing aid is intense as well. Hard-of-hearing people simply cannot hear their own voice with the fidelity that they should be receiving.
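Putting those numbers together, a back-of-the-envelope check of whether an input will overdrive the front end might look like the following. The 95 dB SPL ceiling is an assumed stand-in for the "mid-90 dB SPL range" mentioned above, and the 12 dB of microphone attenuation applies only to energy near 250 Hz, where the most intense first-formant components sit.

```python
ADC_CEILING_DB_SPL = 95.0  # assumed clipping point of the front end

def overdrives_adc(rms_db_spl, crest_factor_db, mic_attenuation_db=0.0):
    """True if the instantaneous peak (RMS plus crest factor, less any
    microphone roll-off) exceeds the converter's assumed ceiling."""
    peak_db_spl = rms_db_spl + crest_factor_db - mic_attenuation_db
    return peak_db_spl > ADC_CEILING_DB_SPL

# Own voice at the hearing aid: ~85 dB SPL RMS with ~17 dB instantaneous peaks
print(overdrives_adc(85, 17))                         # True: broadband microphone clips
print(overdrives_adc(85, 17, mic_attenuation_db=12))  # False: -6 dB/octave mic at 250 Hz
```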

So…What’s the Catch?

So why aren’t all hearing aid manufacturers flocking to install -6 dB/octave microphones in their devices? Figure 1, from Chasin and Schmidt,7 shows the internal noise for three conditions: the red line is the internal noise spectrum with a broadband microphone, and the violet line is the higher spectrum obtained with a -6 dB/octave microphone. In my view, this increase in internal noise level has erroneously scared off some designers in the hearing aid industry. The black line shows the significantly reduced internal noise spectrum of a -6 dB/octave microphone when low-level expansion (LLE) is set to maximum. The trade-off of low-level expansion is that it reduces amplification for soft sounds in the frequency region to which it is applied; depending on the details of the LLE settings, less gain for soft sounds would be applied in the low frequency region. Arguments against a -6 dB/octave microphone that are based on internal noise levels are therefore weak.
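For readers unfamiliar with how low-level expansion trades noise for soft-sound gain, here is a minimal, hypothetical input/gain rule; the kneepoint, ratio, and nominal gain are made-up parameters for illustration, not values from Chasin and Schmidt.7

```python
def expansion_gain_db(input_db_spl, kneepoint_db_spl=45.0,
                      expansion_ratio=2.0, nominal_gain_db=20.0):
    """Low-level expansion: below the kneepoint, gain falls off so that very
    soft inputs (including microphone self-noise) receive less amplification."""
    if input_db_spl >= kneepoint_db_spl:
        return nominal_gain_db
    # With a 2:1 expansion ratio, every dB the input drops below the kneepoint
    # reduces the applied gain by 1 dB (output falls 2 dB per 1 dB of input).
    shortfall = kneepoint_db_spl - input_db_spl
    return nominal_gain_db - shortfall * (expansion_ratio - 1.0)

# Soft sounds (and internal noise) near 30 dB SPL get 15 dB less gain than
# sounds at or above the 45 dB SPL kneepoint.
print(expansion_gain_db(30))  # 5.0
print(expansion_gain_db(60))  # 20.0
```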

The benefits of a -6 dB/octave microphone for those who require 30 dB or less gain in the lower frequency region are evident. These hard-of-hearing consumers will still obtain all of the amplification they require (any missing low frequency amplification can be replaced by software adjustments after the A/D converter), will have increased headroom for the lower frequency vowel (and other sonorant) components of their own speech, and will have an increased appreciation for the music that they listen to and play.

For those people with a moderate or more severe loss in the low frequencies, a broadband microphone may be useful, since they require substantial low frequency gain that may not be fully replaceable by software adjustments. However, for those with less than a 60 dB HL hearing loss at 250-500 Hz, a -6 dB/octave microphone may be the way to go.

Acknowledgement

The author thanks Steve Armstrong and Betty Rule for discussions that led to this article.



References
  1. Chasin M, Russo F. Hearing aids and music. Trends Amplif. 2004;8(2):35-48.
  2. Killion MC. What special hearing aid properties do performing musicians require? Hearing Review. 2009;16(2):20-31.
  3. Hockley NS, Bahlmann F, Chasin M. Hearing instruments to enjoy live music. Hear Jour. 2010;63(9):30-38.
  4. American National Standards Institute (ANSI). American National Standard Specification of Hearing Aid Characteristics. ANSI S3.22-2003. New York: ANSI; 2003.
  5. Sivian LJ, White SD. On minimum audible sound fields. J Acoust Soc Am. 1933;4:288-321.
  6. Cox RM, Matesich JS, Moore JN. Distribution of short-term RMS levels in conversational speech. J Acoust Soc Am. 1988;84:1100-1104.
  7. Chasin M, Schmidt M. The use of a high frequency emphasis microphone for musicians. Hearing Review. 2009;16(2):32-37.

Citation for this article:

Chasin M. Should All Hearing Aids Have a -6 dB-per-octave Microphone? Hearing Review. 2012;19(10):56-58.