A different approach to practicing music for those with a hearing loss | Hearing Review August 2014
Audio producers and engineers seek SNR optimization, have the luxury to use audio equipment with exceptionally high specifications running on AC power, and avoid extensive signal processing. This provides an amplified sound signal with stunningly realistic fidelity. Practical high-quality hearing aid systems designed for musicians with hearing loss might find ways to incorporate aspects of this approach.
By Richard Einhorn
If you are a good musician, it is critically important to hear your instrument well in order to shape your performance with precision. If you have a moderate or severe hearing loss, some kind of electronic amplification will almost always be necessary.
My friend Charles Mokotoff—a wonderful classical guitarist who unfortunately developed a very serious hearing loss (Figure 1)—recently asked me, “If you listen to yourself play through the carefully configured music program of your hearing aids, is the sound you get closer to reality than listening with high-quality audio equipment?”
While pondering his question, I wondered: Exactly what “high-quality audio equipment” could Charles use in order to hear his guitar? I conjured a thought experiment that might provide some insight into the different ways that hearing loss and audio professionals use very similar technologies in their work.
To keep things simple, I decided to concentrate on how a musician with hearing loss might use professional-level audio gear simply to practice. To augment my own background in audio, I sought technical advice from multiple successful classical music producers and engineers, each of whom has worked on hundreds of recordings, including classical guitar.
Of course, each hearing loss is different, and some problems—such as a loss of frequency discrimination that will interfere with the perception of melody and harmony—cannot be adequately compensated for with modern technology. Fortunately, Charles discriminates pitch well in both ears, so my colleagues and I decided to specify a practice system similar in approach to the way producers and engineers record music. We would provide our guitarist with extremely high-quality amplification by using superb equipment and ensure he heard his instrument clearly by optimizing the signal-to-noise ratio (SNR).
What Do Music Producers and Engineers Do?
Although roles often overlap, record producers typically supervise the entire recording process while recording engineers set up and operate the audio equipment.
The best producers and engineers are often excellent musicians themselves, trained to analyze what they hear at a very fine level of detail. Producers like David Frost, a 14-time Grammy winner who records Riccardo Muti and the Chicago Symphony, must be able to rapidly identify a softly played wrong note or the most minor tuning problem. Engineers like Tim Martyn (four Grammys/Boston Symphony) and Tom Lazarus (nine Grammys/Yo-Yo Ma) use their ears to detect and eliminate even the slightest hint of distortion. These professionals go to considerable expense and effort to use world-class equipment and carefully honed techniques to make the best-sounding recordings possible.
Additionally, because their reputations depend on extremely acute listening abilities, top producers and engineers take their hearing very seriously; many use earplugs in loud environments like subways or noisy parties and get annual hearing tests.
Similar Technology, Different Approach
While hearing aid and audio recording technologies are similar—both fields use microphones, amplifiers, digital signal processing (DSP), and loudspeakers (receivers)—there are substantial differences in how this equipment gets used.
Hearing aids focus on those frequency bands crucial for speech comprehension. Modern hearing aids are worn on (or in) the ear and include small, often directional, microphones. Hearing instruments use sophisticated DSP algorithms to significantly alter the sound in an effort to compensate for hearing loss and, to some extent, for less-than-acoustically-optimal mic placement in noisy situations.1 The hearing professional selects a fitting method for the patient’s specific hearing loss,2 solicits feedback from the patient, and fine-tunes the sound of the hearing aids. In my experience, the audiologists and dispensing professionals do not rely on their own hearing to make adjustments.
By contrast, audio recording technology does not necessarily seek to optimize speech perception. Instead, the ideal objective is to accurately reproduce the entire audio spectrum (20 Hz to 20 kHz). Unlike many real-life situations where hearing aids are used, neither the mics nor the sound sources move around very much during recording sessions. Audio professionals place mics wherever they believe the optimal balance between the instrument’s sound and the ambience of the room can be found. In classical and jazz recordings, producers and engineers try to minimize audible alterations to the natural sound while DSP is typically adjusted not by algorithm, but by ear.
A Music Practice System for Charles
For our thought experiment, my colleagues and I agreed it was important to optimize the SNR through the use of cardioid (directional) mics and high-quality audio technology. Much of the basic equipment and signal flow of the proposed setup will be familiar to hearing care professionals (Figure 2). Two microphones are used to create a stereo soundfield. These are connected to mic preamps that amplify each mic's low-level electrical signal. The "mic-pre" outputs are sent to analog-to-digital (A/D) converters that convert the analog electrical signal into a stream of numbers. The data from the A/D flow into a computer running digital audio workstation (DAW) software. The output of the DAW is routed to a digital-to-analog (D/A) converter, and the resulting analog signal is sent to an amplifier that drives high-quality stereo headphones.
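The signal chain just described can be modeled as a simple pipeline. The sketch below treats each stage as a function over a block of float samples; the gain values and stage behaviors are illustrative assumptions, not real device specifications:

```python
from functools import reduce

# Toy model of the practice rig's signal chain. Each stage is a function
# that transforms a block of audio samples (floats, full scale = 1.0).
# Gain values are illustrative, not real device specifications.

def mic_preamp(block, gain_db=40.0):
    """Amplify the mic's low-level signal by a fixed gain in dB."""
    g = 10 ** (gain_db / 20)
    return [s * g for s in block]

def a_to_d(block, bits=24):
    """Quantize to a 24-bit grid, as the converters in the interface would."""
    steps = 2 ** (bits - 1)
    return [round(s * steps) / steps for s in block]

def daw(block):
    """DSP stage: a pass-through here, in keeping with 'less DSP is more'."""
    return block

def d_to_a(block):
    """Back to analog; modeled as an identity."""
    return block

def headphone_amp(block, volume=0.5):
    """Final listening-level control."""
    return [s * volume for s in block]

CHAIN = [mic_preamp, a_to_d, daw, d_to_a, headphone_amp]

def run(block):
    """Pass a block of samples through every stage, in order."""
    return reduce(lambda b, stage: stage(b), CHAIN, block)
```

A mic-level signal at 0.001 of full scale emerges roughly 50 times larger after 40 dB of preamp gain and the volume control, which is the whole point of the chain: quiet acoustic input, comfortable listening output.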
While recording studios take great pains to isolate the studio from outside noise, we agreed Charles simply needed a reasonably quiet practice space, isolated from loud air conditioners and other common noise sources.
Great Mics and Digital Audio Interfaces
Music producers and engineers obsess over microphone choices and placement. Several different brands came up in our conversations, but one model was clearly preferred: a small diaphragm condenser mic with a cardioid (directional) pattern (Figure 3).
Since the lowest note (open E) of Charles’s guitar is about 83 Hz, the Sennheiser MKH 40 P48—with its nearly ruler-flat frequency response from 40 Hz to 20 kHz—can easily capture the full range of his instrument. Producer David Frost suggested that a pair of these mics be positioned very close to the instrument in order to pick up “as much of the guitar and as little ambience as possible.” The mics should be aimed at the guitar’s soundhole and several different techniques for angling the mics (Figure 4) should be tried for optimal stereo balance and sound quality (eg, see Auld3 and Boudreau et al4).
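The "about 83 Hz" figure is easy to verify from the standard equal-temperament tuning formula; a quick sketch (the note names and MIDI numbers are standard, with A4 = 440 Hz assumed as the reference):

```python
import math

A4 = 440.0  # Hz, standard concert pitch reference

def note_freq(midi_note: int) -> float:
    """Equal-temperament frequency of a MIDI note number (A4 = note 69)."""
    return A4 * 2 ** ((midi_note - 69) / 12)

# Open strings of a guitar in standard tuning, low to high:
for name, n in [("E2", 40), ("A2", 45), ("D3", 50),
                ("G3", 55), ("B3", 59), ("E4", 64)]:
    print(f"{name}: {note_freq(n):.1f} Hz")
# The low E2 comes out at 82.4 Hz -- the "about 83 Hz" cited above --
# comfortably above the mic's 40 Hz lower limit.
```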
My colleagues agreed that instead of separate (and very expensive) individual components, the better quality “digital audio interfaces”—which integrate mic preamps and digital converters into a single box—would provide more than adequate quality for Charles’s purpose. These devices range in price from a few hundred to several thousand dollars, depending upon the number of audio channels and the quality of the electronics (Figure 5). The digital converters should be set to the industry-standard 96 kHz sample rate with a bit depth of 24 bits.5 Such a spectacularly high frequency response and wide dynamic range, which are likely beyond the discernment of even the most experienced listeners with normal hearing,6 will create an audio system of exceptional transparency.
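Why 96 kHz and 24 bits count as "spectacularly high" follows from two standard back-of-envelope figures: the Nyquist limit and the theoretical dynamic range of linear PCM (about 6.02 dB per bit, plus 1.76 dB):

```python
sample_rate = 96_000  # Hz
bit_depth = 24        # bits per sample

# Nyquist: the highest frequency a sample rate can represent is half that rate.
nyquist_khz = sample_rate / 2 / 1000

# Theoretical dynamic range of linear PCM: ~6.02 dB per bit, plus 1.76 dB.
dynamic_range_db = 6.02 * bit_depth + 1.76

print(f"Nyquist limit: {nyquist_khz:.0f} kHz")      # 48 kHz, well past 20 kHz
print(f"Dynamic range: {dynamic_range_db:.0f} dB")  # ~146 dB, vs ~98 dB for 16-bit CD
```

Both figures comfortably exceed the limits of normal human hearing (20 kHz, roughly 120 dB), which is exactly the point made above: the converters will not be the weak link.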
Although closed over-the-ear headphone styles might prevent feedback, in-ear earphones or monitors (eg, Etymotic Research ER-4PT, Figure 6) will likely work best for this system. In-ear earphones feature ear tips inserted into the ear, which effectively block ambient sound, provide excellent feedback control, and deliver superb audio quality. For the most comfortable and effective fit, Charles should visit a hearing care professional to have custom ear tips made.
It is very likely that, even with his hearing loss, the headphone output built into the audio interface will provide Charles with ample undistorted gain to hear comfortably. If not, the unanimous and enthusiastically recommended choice for an external headphone amp was the $1,600 Grace Design m903. “But it’s probably overkill,” said David Frost with considerable understatement!
So far, every effort has been made to optimize SNR and sound quality by using a quiet room, a “close-mic” technique, and some of the finest audio equipment available. Having gone to such lengths, many of the acoustical problems a musician with hearing loss might encounter during a typical practice session (eg, too much ambience) have been eliminated.
If we now add audibly significant amounts of signal processing like equalization or compression, we risk distorting the beautiful sound while adding little (if any) additional perceptual clarity to the music. Nevertheless, a small amount of DSP may be helpful and is readily available in the digital audio workstation software.
A modern DAW, such as Digital Performer (Figure 7) or Pro Tools, typically includes a breathtaking amount of control over gain, sound placement and phase, as well as numerous styles of equalizers, compressors, limiters, reverberators, and other specialized signal processors. However, in keeping with our approach that less DSP is more, Charles should try only a small amount of high-frequency equalization if the guitar sounds muffled to him—about a 6 dB boost around 3.5 kHz—but probably not much more.
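That boost corresponds to a standard peaking-EQ filter, the kind any DAW equalizer provides. A sketch using the widely used Robert Bristow-Johnson "cookbook" biquad formulas (the Q value of 1.0 is my assumption, not a figure from the article):

```python
import cmath
import math

def peaking_eq_coeffs(fs, f0, gain_db, q=1.0):
    """RBJ cookbook peaking-EQ biquad coefficients, normalized so a0 = 1."""
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b = [1 + alpha * A, -2 * math.cos(w0), 1 - alpha * A]
    a = [1 + alpha / A, -2 * math.cos(w0), 1 - alpha / A]
    return [bi / a[0] for bi in b], [ai / a[0] for ai in a]

def gain_db_at(freq, fs, b, a):
    """Magnitude response of the biquad at one frequency, in dB."""
    z = cmath.exp(-2j * math.pi * freq / fs)
    h = (b[0] + b[1] * z + b[2] * z * z) / (a[0] + a[1] * z + a[2] * z * z)
    return 20 * math.log10(abs(h))

# The boost suggested above: +6 dB centered at 3.5 kHz, at a 96 kHz sample rate.
b, a = peaking_eq_coeffs(96_000, 3500, 6.0)
print(round(gain_db_at(3500, 96_000, b, a), 2))  # 6.0 at the center frequency
```

Because the filter is a narrow bell, frequencies far from 3.5 kHz pass nearly untouched, which is what keeps such a gentle boost from audibly distorting the guitar's natural sound.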
For protection against unpredictably loud noises (which modern hearing aids address through various types of compression), he should place a limiter with a fairly fast rise time, a hard-knee, and a high compression ratio on the DAW’s final output fader (Figure 8). This creates a “brickwall limiter,” which ensures that dangerously high signal levels (eg, from a falling water glass) will not reach his ears.7 Charles should also experiment with a small amount of “parallel compression,”8 a technique that leaves high-amplitude signals unaffected while raising the gain of low-amplitude signals, similar to some compression techniques in hearing aids used to compensate for the decreased dynamic range common in hearing loss.2
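Both techniques are easy to sketch at the sample level. The version below uses a simple static compressor with no attack/release smoothing; the threshold, ratio, mix amount, and ceiling are illustrative assumptions, not settings from the article:

```python
import math

def db_to_lin(db):
    return 10 ** (db / 20)

def lin_to_db(x):
    return 20 * math.log10(max(x, 1e-12))

def compress(sample, threshold_db=-40.0, ratio=4.0):
    """Static downward compression of one sample (no attack/release smoothing)."""
    level_db = lin_to_db(abs(sample))
    if level_db > threshold_db:
        level_db = threshold_db + (level_db - threshold_db) / ratio
    return math.copysign(db_to_lin(level_db), sample)

def parallel_compress(sample, wet_gain=0.5):
    """Blend the dry signal with a heavily compressed copy. Quiet signals are
    lifted (the copy matches them, adding level) while loud peaks are left
    nearly unchanged (the copy is far quieter than the dry signal)."""
    return sample + wet_gain * compress(sample)

def brickwall_limit(sample, ceiling_db=-1.0):
    """Hard ceiling on the final output: nothing above it gets through."""
    c = db_to_lin(ceiling_db)
    return max(-c, min(c, sample))
```

With these settings, a quiet sample is raised by about 3.5 dB while a full-scale peak gains only a fraction of a decibel, and the brickwall stage guarantees the output never exceeds the ceiling, which is the protective behavior described above.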
Signal Gain Structure
The equipment described above is fairly straightforward to set up and connect. However, because there are several places where the signal level can be adjusted (or misadjusted), it can be surprisingly tricky to set gain properly. This takes time, experience, and patience to do well, but the basic idea is to have the audio signal pass from the mics through the DAW software to the earphones at a level that is neither too low (adding noise) nor too high (adding distortion).
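That "neither too low nor too high" target can be made concrete with a rough level check. In the sketch below, the -24 to -6 dBFS window is my own illustrative rule of thumb, not a figure from the article:

```python
import math

def peak_dbfs(samples):
    """Peak level of a block of float samples, in dB relative to full scale."""
    peak = max(abs(s) for s in samples)
    return 20 * math.log10(peak) if peak > 0 else float("-inf")

def check_gain_staging(samples, low_db=-24.0, high_db=-6.0):
    """Flag levels that are too low (buried in noise) or too hot (near
    clipping). The -24/-6 dBFS window is an illustrative rule of thumb."""
    p = peak_dbfs(samples)
    if p < low_db:
        return "too low: raise the mic preamp gain"
    if p > high_db:
        return "too hot: lower the gain before the A/D clips"
    return "good"

print(check_gain_staging([0.25]))   # peaks at about -12 dBFS: "good"
print(check_gain_staging([0.001]))  # buried near the noise floor: too low
```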
Once levels are correctly adjusted, however, Charles should be able to sit in front of the mics, remove his hearing aids, put on his earphones, set a comfortable listening level by turning up the volume on the headphone output of the interface, and hear his guitar as well as his ears will allow with extraordinary sonic clarity.
It is quite possible to set up a beautiful-sounding practice system similar to the one I’ve described here; in fact, I use a modified version of this setup (substituting digital synthesizers and samplers for microphones) to compose (see Einhorn9). However, such a system can easily cost more than $4,000 (plus computer) and also may be beyond the technical ability of some, but by no means all, dedicated musicians.
The purpose of this thought experiment, however, was not so much to “spec” a real setup as it was to focus attention on the differences in approach between the professional hearing loss and audio fields. Audio producers and engineers seek SNR optimization, have the luxury to use audio equipment with exceptionally high specifications running on AC power, and avoid extensive signal processing. This provides an amplified sound signal with stunningly realistic fidelity.
Practical, high-quality hearing aid systems designed for musicians with hearing loss might find ways to incorporate aspects of this approach into a similar one, perhaps by combining significant recent improvements in music perception via hearing aids10 with the ability to connect easily to high-quality external mics and other equipment.
References
1. Einhorn R. Practical approaches for maximising signal-to-noise ratio for music and other applications. ENT & Audiology News. July/August 2013:22.
2. Venema TH. Compression for Clinicians. 2nd ed. Independence, Ky: Delmar Cengage Learning; 2006.
3. Auld R. Recording the classical guitar. Recording Magazine. Available at: http://www.recordingmag.com/resources/resourceDetail/162.html
4. Boudreau J, Frank R, Sigismondi G, Vear T, Waller R. Microphone Techniques for Recording. Niles, Ill: Shure Corp; 2009.
5. Katz B. Mastering Audio. 2nd ed. New York: Focal Press; 2007.
6. Meyer EB, Moran D. Audibility of a CD-standard A/D/A loop inserted into high-resolution audio playback. J Audio Eng Soc. 2007;55(9):775-779.
7. MOTU Inc. Digital Performer 8 Plug-In Guide. Cambridge, Mass: MOTU Inc; 2012.
8. Robjohns H. Parallel compression: the real benefits. Sound on Sound. February 2013. Available at: http://www.soundonsound.com/sos/feb13/articles/latest-squeeze.htm
9. Einhorn R. No compromise. Hearing Loss Magazine. May/June 2012:13.
10. Chasin M. A hearing aid solution for music. Hearing Review. 2014;21(1):28-30.
Original citation for this article: Einhorn R. Using professional audio techniques for music practice with hearing loss: A thought experiment. Hearing Review. 2014;21(8):30-33.
ALSO SEE: Film composer Jeff Rona’s Letter to the Editor regarding this article and some economical options for musicians.