A -6 dB/octave microphone, as the name suggests, is less sensitive to sound below 1000 Hz: by 6 dB at 500 Hz and by 12 dB at 250 Hz. Such microphones have been commercially available for decades, but it may be time to bring them back in force. For years, the -6 dB/octave microphone has been used as a "low tech" alteration to hearing aids to improve the fidelity and quality of music.
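For readers who want to see the arithmetic, the response can be sketched in a few lines of code. This is a simplified model that assumes a flat response at and above a 1000 Hz corner frequency; the function name and the corner value are illustrative assumptions, not a manufacturer's specification:

```python
import math

def mic_rolloff_db(freq_hz, corner_hz=1000.0):
    """Relative sensitivity (in dB) of a hypothetical -6 dB/octave
    microphone: flat at and above the corner frequency, rolling off
    by 6 dB for every octave below it."""
    if freq_hz >= corner_hz:
        return 0.0
    octaves_below = math.log2(corner_hz / freq_hz)
    return -6.0 * octaves_below

print(mic_rolloff_db(500))   # -6.0  (one octave below 1000 Hz)
print(mic_rolloff_db(250))   # -12.0 (two octaves below)
```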
The weak point of modern digital hearing aids is their analog-to-digital (A/D) converters—especially for the more intense components of music.1,2 Because of design and engineering trade-offs, the A/D converter at the front end of the hearing aid has a limited dynamic input range and can be overdriven by signals exceeding the mid-90 dB SPL range.
This has nothing to do with the hearing aid software, or with any other component later in the signal path. An A/D converter that creates distortion at the front end of the hearing aid results in poor fidelity, and no software change later in the system can remove this front-end distortion.
Here is the short, slightly simplified story: The most intense components of speech (ie, the lower frequency vowels) do not typically exceed 85 dB SPL, so modern hearing aids can easily handle speech. The same cannot be said of music, however, where even quiet music can be in excess of 95 dB SPL. It is these more intense components of music that overdrive the A/D converter.
There are several "fixes" to counter this problem (eg, see Hockley, Bahlmann, and Chasin3), but one is simply to exchange the normal broadband microphone for one that is less sensitive (by -6 dB/octave) to the more intense lower frequency inputs of music. Low frequency sound still reaches the ear, typically by a direct route through the vent or unoccluded ear canal. If low frequency amplification is still required, software adjustments can re-establish these missing low frequency sounds without distortion, because the additional low frequency gain is generated after the A/D converter.
This low-tech "fix" is useful even for quieter music with levels around 80 dB SPL, because the crest factor of music adds roughly another 20 dB at the microphone input (80 dB + 20 dB = 100 dB, well above 95 dB). The crest factor is the difference between the peak of a signal and its average or RMS (root mean square) level. The waveform of music is "peakier" than that of speech because of the lower level of damping inherent in many musical instruments; the crest factor for music is typically 18-20 dB, whereas that for speech is traditionally assumed to be only about 12 dB.
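The crest factor calculation can be made concrete with a short sketch. The waveforms here are synthetic toys, not measurements of real music: the "peaky" signal is simply a sine with an assumed exponential decay, loosely mimicking a lightly damped instrument.

```python
import math

def crest_factor_db(samples):
    """Crest factor: peak level relative to RMS level, in dB."""
    peak = max(abs(s) for s in samples)
    rms = math.sqrt(sum(s * s for s in samples) / len(samples))
    return 20 * math.log10(peak / rms)

fs = 44100  # sample rate (Hz); one second of signal
# A steady sine has a crest factor of 20*log10(sqrt(2)), about 3 dB.
sine = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]

# A "peaky" toy signal: bursts every 0.1 s that decay quickly,
# spending most of the time near silence.
peaky = [math.sin(2 * math.pi * 440 * n / fs) * math.exp(-((n % 4410) / 300))
         for n in range(fs)]

print(round(crest_factor_db(sine), 1))   # about 3 dB
print(round(crest_factor_db(peaky), 1))  # much higher: the waveform is "peakier"
```

The peaky waveform's crest factor lands well above the sine's, which is the sense in which music "adds" extra decibels on top of its average level.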
However, maybe we are wrong; maybe the crest factor for speech is also much greater than 12 dB. It is quite possible that the hard-of-hearing person's own voice, with its higher SPL at the hearing aid microphone and perhaps a higher crest factor, may also overdrive modern A/D converters. The traditional crest factor that appears in all of the ANSI hearing aid reporting standards4 is based on the work of Sivian and White5 and was verified by Cox et al.6 In both of these studies, the crest factor was measured with an analysis window of 125 msec. That choice is understandable: the time constants (temporal limitations) of our auditory systems are on the order of 125 msec, so shorter analysis windows would seem unnecessary. However, this is not a "temporal integration issue" for our auditory systems; it is an "input issue" for the hearing aid, and it has nothing to do with the characteristics of our auditory system.
Let's re-examine some crest factors of speech using a temporal analysis window shorter than 125 msec. Recall that the crest factor is the difference between the peak and the average or RMS level. Shorter temporal analyses than 125 msec will capture higher instantaneous peaks, with a resulting higher crest factor. A speech sample analyzed with a 125 msec window may indeed have a 12 dB crest factor, but the same sample, analyzed with a 50 msec window, may yield a 16-20 dB crest factor.
Table 1 demonstrates that the crest factor for the same speech sample is actually a function of the analysis window. Using the traditional 125 msec window, the crest factor is indeed on the order of 12 dB, but it is almost 17 dB with a shorter window that better captures the instantaneous peaks contributing to the crest factor.
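The window effect can be demonstrated with a toy "bursty" signal. This is an illustration of the principle only, not a speech measurement: the burst timing and levels are assumptions, and the crest factor is estimated here as the peak of a windowed RMS envelope relative to the overall RMS.

```python
import math

def rms(x):
    return math.sqrt(sum(s * s for s in x) / len(x))

def crest_factor_db(samples, window_len):
    """Crest factor estimated as the peak of the windowed RMS envelope
    relative to the overall RMS. Shorter windows follow the
    instantaneous peaks more closely and so yield higher values."""
    overall = rms(samples)
    peaks = [rms(samples[i:i + window_len])
             for i in range(0, len(samples) - window_len + 1, window_len)]
    return 20 * math.log10(max(peaks) / overall)

fs = 8000  # sample rate (Hz)
# Toy signal: a 25 msec loud burst every 125 msec over a quiet background.
signal = [math.sin(2 * math.pi * 500 * n / fs) * (1.0 if (n % 1000) < 200 else 0.05)
          for n in range(fs)]

long_window = crest_factor_db(signal, window_len=1000)  # 125 msec window
short_window = crest_factor_db(signal, window_len=200)  # 25 msec window
print(round(long_window, 1), round(short_window, 1))
```

With the 125 msec window, each analysis frame averages a whole burst-plus-pause cycle, so the "peaks" flatten out; the 25 msec window isolates the bursts and reports a crest factor several dB higher, as the Table 1 pattern suggests.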
So, are we doing a disservice to hearing aid users if we don't use a microphone that is less sensitive to the intense lower frequency components of music and speech? The -6 dB/octave microphone is less sensitive to the lower frequencies of speech and music, and many of those lower frequency components are also the most intense ones. The net result is that a lower sound level reaches the A/D converter, so it is not as easily overdriven. There is less distortion for music than there would be with a broadband microphone that is as sensitive to low frequency sound as it is to mid and high frequency sounds. There is also less distortion for the wearer's own voice at the level of the hearing aid.
The above is a simplified explanation of why "less may be more" when it comes to intense inputs to a hearing aid, such as music. There are other technical considerations, but this is the essence of the question: "How can we reduce the sound level so that it won't cause the A/D converter to distort?"
A -6 dB/octave microphone has been used specifically for music, but perhaps it should be used in all hearing aids where the hard-of-hearing consumer requires only 30 dB or less of gain in the lower frequency region. If a person requires significant low frequency gain, then some or possibly all of this gain can be supplied as "digital gain" after the A/D converter via software adjustments. To fully compensate, one would need to add 6 dB to the required gain at 500 Hz and 12 dB to the required gain at 250 Hz.
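That compensation bookkeeping can be sketched as follows. The prescribed gains below are hypothetical, and the 1000 Hz corner frequency is an assumption; the compensation is simply the mirror image of the microphone's roll-off.

```python
import math

def digital_compensation_db(freq_hz, corner_hz=1000.0):
    """Extra post-A/D ("digital") gain needed to offset a -6 dB/octave
    microphone: 0 dB at and above the corner, rising by 6 dB per
    octave below it (6 dB at 500 Hz, 12 dB at 250 Hz)."""
    if freq_hz >= corner_hz:
        return 0.0
    return 6.0 * math.log2(corner_hz / freq_hz)

# Hypothetical prescribed low frequency gains (dB), before compensation.
prescribed = {250: 20.0, 500: 15.0, 1000: 10.0}
total = {f: g + digital_compensation_db(f) for f, g in prescribed.items()}
print(total)  # 250 Hz: 20 + 12 = 32; 500 Hz: 15 + 6 = 21; 1000 Hz: 10
```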
A person's own voice is about 65 dB SPL at 1 meter (3 feet), but about 85 dB SPL at the level of the person's own ear. The hearing aid microphone cannot tell whether a sound is intense from a large distance or "average" from about 6 inches. All vowels (and all other sonorants) have their first formant at or below 500 Hz, and the first formant is the most intense element of the sound: high vowels ([i] and [u]) have a very low frequency first formant (<250 Hz), and low vowels (eg, [a]) have theirs in the 400-500 Hz region.
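The distance effect can be approximated with the free-field inverse square law. This is a simplification (it ignores head diffraction and near-field effects, which is partly why the measured own-voice level runs a few dB higher), but it shows the right order of magnitude:

```python
import math

def level_at_distance(level_db, ref_m, target_m):
    """Free-field inverse-square approximation: the level changes by
    20*log10(ref/target) dB when moving from ref_m to target_m."""
    return level_db + 20 * math.log10(ref_m / target_m)

# Speech at 65 dB SPL measured at 1 m; the hearing aid microphone sits
# roughly 0.15 m (about 6 inches) from the talker's own mouth.
print(round(level_at_distance(65, 1.0, 0.15)))  # about 81 dB SPL
```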
Even speech can have instantaneous peaks about 17 dB above its RMS or average value if the analysis window is short enough (see Table 1). Adding an instantaneous peak that can be 17 dB above an RMS value of 80-85 dB SPL results in levels in excess of 100 dB SPL—more than enough to overdrive the A/D converter. So it is not only music that is intense; a person's own voice at the level of their own hearing aid is intense as well. Hard-of-hearing people simply cannot hear their own voice with the fidelity that they should be receiving.
But why aren't all hearing aid manufacturers flocking to install -6 dB/octave microphones in their devices? Figure 1 from Chasin and Schmidt7 shows the internal noise for three conditions: the red line is the internal noise spectrum with a broadband microphone, and the violet line is the spectrum with a -6 dB/octave microphone. In my view, this increase in internal noise level has erroneously scared off some designers in the hearing aid industry. The black line shows the significantly reduced internal noise spectrum of a -6 dB/octave microphone when low level expansion (LLE) is set to maximum. The trade-off of LLE is that it reduces amplification for soft sounds in the frequency region where it is applied; depending on the details of the LLE settings, less gain for soft sounds would be applied in the low frequency region. Arguments against a -6 dB/octave microphone are therefore weak if they are based on internal noise levels.
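The behavior of low level expansion can be sketched with a toy gain rule. The kneepoint, expansion ratio, and nominal gain below are illustrative assumptions, not any manufacturer's implementation:

```python
def expansion_gain_db(input_db, knee_db=40.0, ratio=2.0, nominal_gain_db=20.0):
    """Toy low level expansion (LLE): at or above the kneepoint the
    nominal gain applies; below it, gain drops by (ratio - 1) dB for
    every dB the input falls under the knee, which suppresses the
    microphone's internal noise along with other very soft sounds."""
    if input_db >= knee_db:
        return nominal_gain_db
    return nominal_gain_db - (ratio - 1.0) * (knee_db - input_db)

print(expansion_gain_db(50))  # 20.0 dB: full gain above the kneepoint
print(expansion_gain_db(30))  # 10.0 dB: reduced gain for soft inputs/noise
```

This is exactly the trade-off described above: the same mechanism that quiets the microphone's internal noise also trims gain for genuinely soft low frequency sounds.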
The benefits of -6 dB/octave hearing aid microphones (for those who require 30 dB or less gain in the lower frequency region) are evident. These hard-of-hearing consumers will still obtain all of the amplification they require (any missing low frequency amplification can be replaced by software adjustments after the A/D converter), will have increased headroom for the lower frequency vowel (and other sonorant) components of their own speech, and will have an increased appreciation for the music that they listen to and play.
For those with a moderate or greater loss in the low frequencies, a broadband microphone may still be useful, since the substantial low frequency gain they require may not be fully replaceable by software adjustments. However, for those with less than a 60 dB HL hearing loss at 250-500 Hz, a -6 dB/octave microphone may be the way to go.
The author thanks Steve Armstrong and Betty Rule for discussions that led to this article.
CORRESPONDENCE can be addressed to: