Back to Basics | January 2017 Hearing Review
In the past several years the hearing aid industry has introduced some ingenious solutions to the “music and hearing aids” problem. Simply stated, the higher-level inputs of music tend to overdrive the analog-to-digital (A/D) converter, or “front end,” of many hearing aids.
Typical 16-bit architectures found in many modern hearing aids usually limit the input dynamic range to 90-95 dB. This can be likened to trying to get through a low-hanging doorway—unless you duck down, or somehow increase the height of the doorway, a bumped head is sure to happen. With hearing aids, this means distortion, and no amount of programming (that occurs later in the system) will improve things.
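The 90-95 dB figure follows from a standard engineering rule of thumb: each bit of A/D resolution contributes roughly 6 dB of dynamic range, and real front ends fall a few dB short of the ideal. A minimal Python sketch of that rule (the function name is illustrative, not from the article):

```python
def ideal_dynamic_range_db(bits: int) -> float:
    """Approximate dynamic range (dB) of an ideal N-bit A/D converter.

    Each extra bit doubles the number of quantization levels, which
    adds about 20*log10(2) ~= 6.02 dB of dynamic range. Practical
    converters lose a few dB to noise and implementation limits,
    consistent with the 90-95 dB quoted for 16-bit hearing aids.
    """
    return 6.02 * bits

print(round(ideal_dynamic_range_db(16), 1))  # ~96.3 dB (16-bit)
print(round(ideal_dynamic_range_db(24), 1))  # ~144.5 dB (post-16-bit)
```

This also illustrates why approach #5 below (a post-16-bit architecture) raises the “doorway”: even a few extra bits add substantial headroom before the converter clips.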
To date, 5 technologies for handling music with hearing aids have been implemented and are clinically available in the marketplace, each taking a different approach:
1) Analog compressor prior to the A/D converter with digital gain after;
2) Reducing the microphone sensitivity to “fool” the A/D converter into thinking that the input was at a lower level;
3) Shifting the static 90-95 dB dynamic range upwards to 15-110 dB SPL;
4) Auto-ranging (perhaps with stacked A/D converters) to extend the input limit, and
5) Post 16-bit architecture that yields a larger dynamic range.
In each of these approaches, the higher-level elements of music are digitized so that programming can be accomplished without appreciable distortion. The first two can be likened to ducking under a low doorway or bridge, and the latter three to increasing the height of that doorway or bridge. In some cases, hearing aid manufacturers use a combination of more than one of these approaches.
Whatever the strategy used, clinically we are seeing people who either play music, or like to listen to music, being able to enjoy and appreciate the sound—some for the first time.
Are Music-listening Solutions Equivalent?
The situation is similar to frequency transposition, where there are subtle but clinically important differences between frequency compression and frequency shifting. Issues arise as to how each of these algorithms is implemented and how it may interact with the other circuitry in a hearing aid.
My clinical “gut feeling” is that each of these “music and hearing aids” approaches is without drawbacks or hidden problems, but it would be interesting to compare and contrast them using a large population of musicians and non-musicians. An issue immediately surfaces, however: unlike frequency transposition, a manufacturer may combine one or more of the 5 approaches within a single hearing aid. It would therefore be difficult to say definitively whether an advantage was due to technology #1 or technology #2 within any one device.
Perhaps the only area of difference may be in the internal noise level of the hearing aid. Hard-of-hearing clients who have relatively good hearing acuity in the lower frequency region (below 1000 Hz) may indeed be able to hear the noise floor of their hearing aids when in quiet locations.
Clinically, I allow my clients who are being fitted with amplification, and who have relatively good low-frequency hearing thresholds, some time to stop and just listen in the quiet of my office. The office noise levels are fairly similar to those that a client would experience in their own home, so this is a reasonable “technology-screening strategy.”
Other than this “stop-and-listen” approach, I am not sure that we can do much more to ascertain whether one manufacturer’s strategies are any better than the next.
This article was adapted from a post on Dr Chasin’s blog at hearinghealthmatters.org.
Original citation for this article: Chasin M. Back to Basics: Music Listening and Hearing Aids. Are All Approaches the Same? Stop and Listen. Hearing Review. 2017;24(1):12.