Why we need to learn more and do more for our patients who love music (everyone)

Music & Hearing Loss | August 2014 Hearing Review

By Douglas L. Beck, AuD

Musicians (like engineers) can be a hearing care professional’s (HCP’s) worst nightmare and, on occasion, their greatest teacher. Working with musicians forces us to maintain current, technical, and pragmatic knowledge with regard to psychoacoustics, anatomy, physiology, acoustics, and physics, as well as a working knowledge of the hearing, listening, and amplification needs (and jargon) of modern musicians. In brief, HCPs have responsibilities that range from testing and protecting hearing to advising musicians as to sound system practicalities, stage orientation and organization, in-ear monitors, hearing protection devices, and more.

To be fair, addressing the hearing needs of the modern musician is simply not every HCP’s area of interest or expertise. Further, addressing the day-to-day needs of professional musicians is often challenging, yet always rewarding.

Admittedly, the HCP without experience and expertise in the needs and abilities of musicians may get a little tired of hearing that the “audiometer sounds flat” and “250 Hz should actually be 262 Hz” and that the musician has determined his/her tinnitus is “at C#, not D.” However, once we get past these idiosyncratic observations and behaviors, it’s all peaches and cream from there!

We (Doug and Marshall) have assembled this overview addressing diverse issues and factors pertaining to musicians. Knowledgeable musicians and knowledgeable non-musicians, all of whom work with musicians, have been brought together to create this August 2014 issue of The Hearing Review. We hope this special issue will serve as an adjunct to other professional materials addressing these same and similar issues. Rock on.

— Douglas Beck, AuD, & Marshall Chasin, AuD, guest-editors

In this special edition (August 2014 HR):

Music Benefits Across Lifespan: Enhanced Processing of Speech in Noise, By Nina Kraus, PhD, and Samira Anderson, AuD, PhD

The High Notes of Musicians Earplugs, By Patricia A. Johnson, AuD

The “Best Hearing Aid” for Listening to Music: Clinical Tricks, Major Technologies, and Software Tips, By Marshall Chasin, AuD

Using Professional Audio Techniques for Music Practice with Hearing Loss: A Thought Experiment, By Richard Einhorn

A Solution to Challenges Faced by Hearing-impaired Musicians Performing on Loud Amplified Stages, By Larry Revit, MA

The interaction between the human brain and sound is absolutely fascinating. I suspect you agree, as you (the reader) are very likely a Hearing Care Professional (HCP). Let’s start with some basic definitions.

Hearing is the perception (or awareness) of sound. However, and of significant importance, listening is applying meaning to sound. Humans differ from all other beings in their extraordinary ability to create language, which (more or less) applies meaning to sounds.1 Human languages are essentially infinite: they describe concrete and finite things, as well as things, places, and experiences we’ve never encountered! Language allows us to describe particles too small to be seen with the most powerful microscopes, and universes too large to imagine.

Our steadfast grip at the top of the food chain has little to do with hearing. Indeed, cats, dogs, whales, bats, and many other beings have hearing that encompasses different and greater spectral ranges than humans. However, what sets humans apart is their ability to apply meaning to sound (ie, listening). Human listening ability is unmatched across all other beings, and listening is what sets humans at the top of the food chain.1

Further, and while we’re still addressing definitions, consider that to be an “expert” in anything requires some 10,000 hours of practice, training, and preparation. That is, to be an expert skier, backgammon player, musician, pilot, or swimmer requires lots and lots of practice. Ten thousand hours is the equivalent of practicing 24 hours a day for some 417 days. Or, perhaps more reasonably, an expert might practice 4 hours a day, every day, for some 7 years. Further, by the time one has practiced music long enough to be an expert, the brain of the musician has changed!2
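For readers who like to see the arithmetic, here is a quick back-of-the-envelope sketch (plain Python, purely illustrative) of the numbers quoted above:

```python
# Back-of-the-envelope check of the 10,000-hour figures quoted above.
TOTAL_HOURS = 10_000

days_nonstop = TOTAL_HOURS / 24            # practicing around the clock
years_at_4_hrs = TOTAL_HOURS / (4 * 365)   # practicing 4 hours every day

print(f"Non-stop: about {days_nonstop:.0f} days")          # ~417 days
print(f"At 4 hours/day: about {years_at_4_hrs:.1f} years")  # ~6.8, ie, roughly 7 years
```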

Music as Language

Secondary to 10,000 hours of training, the musician’s brain undergoes “involuntary aural rehabilitation.” As a result, the musician’s brain no longer responds to music in the typical way, as most non-musicians’ brains do. After 10,000 hours, the musician’s brain quite literally applies “meaning” to musical sounds. The musician cannot relegate music to background. The musician hears minor chords, major chords, key changes, 7ths, and more, and can (most often) replicate what they listen to without benefit or need of sheet music. Indeed, most musicians can listen to, interpret, and perform most contemporary (and lots of other) music via attentive listening. For the expert musician, music is absolutely a language (much like American Sign Language [ASL] is a real and meaningful language to those fluent in ASL) and the musician’s brain applies meaning to music,3 as the non-musician’s brain applies meaning to conventional speech (and other) sounds.

Limits and Restrictions

As HCPs, we have been taught, and have studied (and often only consider), a narrow view of sound. That is, we focus on the sounds that are useful for medical/diagnostic/audiologic purposes. Specifically, we (HCPs) measure threshold responses (most sounds do not occur, and are not listened to, at threshold) for pure tones from 250 to 8000 Hz (pure tones do not exist in the real world, and human hearing ranges from 20 Hz to 20,000 Hz, perhaps higher in some adolescents, up to 25,000 Hz) in a sound booth (nobody hangs out in sound booths, except KEMAR), and we measure word recognition scores and/or speech reception thresholds in quiet (that’s not the problem the patient complained of!), as well as reflexes, tympanograms, and otoacoustic emissions (OAEs). It is from these typical audiometric measures that we assess, diagnose, and manage people with hearing loss.

Admittedly, the measures and protocols (noted above) evolved for rational and well-founded medical/diagnostic/audiologic reasons—but they don’t address the pragmatic listening needs and abilities of our patients.

That is, the standard test protocols do not include speech-in-noise measures (SIN) or other measures that challenge and measure “functional” hearing, and unfortunately, there are no CPT codes that facilitate the measurement and comparison of SIN scores across different technologies to help decide which technology/protocol/algorithm is best for a given patient.

Specifically, the most common complaint of the patient with the most typical sensorineural hearing loss (SNHL) is difficulty understanding speech in noise (SIN). Yet very few HCPs routinely assess SIN ability, and as such, we are left to “infer” SIN ability based on the audiogram (and other diagnostic measures). However, the correlation between the typical mild-moderate SNHL and SIN ability approaches zero.4 That is, the measurement of 6 to 10 pure-tone thresholds in isolation (ie, 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, and 8000 Hz) tells us nothing more than the type and degree of hearing loss. Threshold and typical audiometric measures are fine for the purpose of medical/diagnostic/audiologic queries such as 1) Is ear disease present? 2) Do I need to refer to a physician? 3) Is this a dangerous condition? However, the (above-mentioned) typical audiometric measures fall short with respect to measuring how the two ears and the brain act and interact as a system.

Music vs Speech

The human auditory system maximally perceives (ie, hears) and understands (ie, listens and applies meaning to) sounds “pitch-matched” to the human voice. That is, the adult human ear canal maximally resonates between 2500 and 3000 Hz, and the most important speech sounds created by the human voice (ie, the second formant or “F2”) also reside in the neighborhood of 2500 to 3000 Hz. One might say the human ear evolved to maximally perceive human speech, or perhaps the human voice has evolved to produce sounds that the human auditory system can maximally perceive. Either way, one can argue the human voice is the most important sound we hear.
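As a rough aside, the ballpark for that canal resonance falls out of simple tube acoustics: a tube open at one end (the pinna) and closed at the other (the eardrum) resonates near the frequency whose wavelength is 4 times the tube’s length. The sketch below is illustrative only; the effective lengths are my own assumptions (the real canal is neither straight nor rigidly terminated), but the quarter-wave estimate lands in or near the 2500 to 3000 Hz neighborhood:

```python
# Rough quarter-wavelength estimate of the adult ear canal resonance.
# Assumption: the canal behaves loosely like a tube open at the pinna and
# closed at the eardrum, so its first resonance is near f = c / (4 * L).

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def quarter_wave_resonance(length_m: float) -> float:
    """First resonance (Hz) of a tube closed at one end."""
    return SPEED_OF_SOUND / (4.0 * length_m)

# Illustrative effective acoustic lengths (cm), not measured values.
for length_cm in (2.5, 3.0, 3.4):
    freq = quarter_wave_resonance(length_cm / 100.0)
    print(f"Effective length {length_cm} cm -> ~{freq:.0f} Hz")
# 2.5 cm -> ~3430 Hz; 3.0 cm -> ~2858 Hz; 3.4 cm -> ~2522 Hz
```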

If we were to compare and examine the spectral content of speech sounds in detail, we would note 71% of all speech sounds are above 1000 Hz.5 I’ll wager all HCPs are familiar with this, and it’s fair to say we each include some form of this information while counseling patients who have high frequency sensorineural hearing loss.

However, what most HCPs are less familiar with is that 72% of the fundamental frequencies of the notes on a (standard) 88-key piano fall below 1000 Hz.6 One might make the argument that the most meaningful acoustic information embedded within speech renders speech more-or-less a high frequency event (71% of all speech sounds are above 1000 Hz), whereas music is more-or-less a low frequency event (72% of the piano’s fundamental frequencies, essentially the left-hand side of the keyboard, fall below 1000 Hz).
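That 72% figure is easy to verify; here is a minimal sketch assuming standard equal temperament with A4 (key 49 of 88) tuned to 440 Hz:

```python
# Count how many of a standard piano's 88 fundamentals fall below 1000 Hz.
# Assumes equal temperament with A4 = key 49 = 440 Hz.

def key_frequency(n: int) -> float:
    """Fundamental frequency (Hz) of piano key n (1 = A0 ... 88 = C8)."""
    return 440.0 * 2.0 ** ((n - 49) / 12.0)

below_1k = sum(1 for n in range(1, 89) if key_frequency(n) < 1000.0)
print(f"{below_1k} of 88 keys ({below_1k / 88:.0%}) lie below 1000 Hz")
# -> 63 of 88 keys (72%) lie below 1000 Hz
```

Incidentally, the same formula puts the piano’s lowest key (A0) at 27.5 Hz and middle C (key 40) at about 262 Hz, which foreshadows the point made below about the audiogram’s 250 Hz floor.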

Limits of the Audiogram: Invisible Hearing Loss

Further, as the audiogram is the gold standard hearing test, most of us (HCPs) don’t look deeper than the audiogram while addressing speech-in-noise and/or listening complaints. That is, children who pass pure-tone screenings are rarely afforded the benefits of additional audiologic testing, such as SIN tests or spatial tests, which document their ability to tell where sound is coming from (spatial hearing). Specifically, when a child passes a pure-tone screening (and presuming the purpose of the pure-tone screening was to declare “pass” [indicating no further testing needed] or “fail” [indicating further tests recommended]), we’re unlikely to test further. However, for many children (and adults), invisible hearing loss may be present in tandem with a perfectly normal audiogram.

That is, if we were to test deeper and challenge the auditory system (two ears and the brain working together as a system), we might detect auditory neuropathy spectrum disorder (ANSD) and/or auditory processing disorders (APD) and/or spatial hearing disorders (SHD)—all of which most often coexist with normal hearing! Further, by challenging and evaluating the auditory system as it’s used in day-to-day listening (to listen to speech in noise), we might discover significant deficiencies in the way particular brains process speech-in-noise, despite normal hearing and often beyond the expected difficulties associated with mild-moderate SNHL.

Limits of the Audiogram: Music

Of significant importance is the fact that 250 Hz is the lowest tone typically tested on an audiogram. “Of note” (sorry, I couldn’t help it), 250 Hz approximates “middle C” on the piano. That is, a standard audiogram entirely ignores (does not represent) hearing across the whole left side of the piano!

Audiograms are excellent diagnostic tools for ear disease, but audiograms don’t tell us enough regarding functional hearing or what the patient actually perceives within their brain with regard to music, speech in noise, or other processing-derived and processing-dependent auditory percepts. That is, the correlation between a mild-to-moderate SNHL and one’s ability to understand speech in noise approaches zero.4

To discern someone’s ability to understand speech in noise, it must be tested. It cannot be inferred or ascertained from an audiogram. The good news is (most often) speech-in-noise ability is easily and efficiently determined and documented with commercially available speech-in-noise tests.7

Daniel Finkelstein8 reported that the Nobel prize-winning behavioral economist, Daniel Kahneman, describes his great intellectual breakthrough as “the realization that social science experts (including economists and HCPs) too often rely on research using samples that are too small, prompting highly unreliable conclusions….”

Consider, if we were to threshold test 250, 500, 750, 1000, 1500, 2000, 3000, 4000, 6000, and 8000 Hz, that would provide 10 data points that might theoretically represent thresholds (only) across the entire human hearing spectrum of approximately 19,980 Hz (20,000 minus 20). However, not only is a 0.05% sample inadequate, but it has huge representative gaps, such as the entire 2000-Hz span from 4000 to 6000 Hz being represented by only its 2 endpoints. Of course, one could argue humans only perceive some 1400 pitches between 20 and 20,000 Hz. However, even given a scant 1400 discernible pitches and using standard behavioral statistics (a 5% margin of error and a 95% confidence level), we would need 302 sample points to meaningfully estimate the population (of hearing) and again, we have 10.
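For readers who want to see where that 302 comes from, here is a minimal sketch using the textbook sample-size formula (95% confidence, 5% margin of error, maximum variability) with a finite-population correction; the code and its parameter choices are my own illustration rather than anything cited in this article:

```python
import math

# Textbook sample-size estimate with finite-population correction,
# applied to the ~1400 discernible pitches mentioned above.

def required_sample_size(population: int, z: float = 1.96,
                         margin: float = 0.05, p: float = 0.5) -> int:
    """Smallest sample needed at the given confidence (z) and margin of error."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population estimate (~384)
    n = n0 / (1 + (n0 - 1) / population)        # correct for the finite population
    return math.ceil(n)

print(required_sample_size(1400))  # -> 302, versus the 10 frequencies we actually test
```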

To be clear, I am not suggesting we test all frequencies between 20 and 20,000 Hz! However, I am suggesting we admit we often don’t test enough or gather enough data to adequately determine what it is people actually perceive via audition. Further, I believe the diagnostic battery makes perfect sense for the purpose of diagnostics, but it is not highly representative of what the patient’s brain is listening to. (Of course, one cannot actually apply behavioral statistics to human hearing for multiple reasons, but I’m confident you understand the point: we measure only a very tiny portion of hearing via standard audiometric diagnostics, and we rarely measure the listening ability of the patient. Meanwhile, the most typical complaint that brings the patient into the office is their listening ability in noisy backgrounds!)

Chicken or Egg? Music or Speech?

I’ll tackle the Chicken-Egg question first with the argument I found plausible in 4th grade. The egg came first. It was a cross-breed by-product of two other bird-like beings. Those beings mated and two eggs resulted: one contained a male, the other a female…the rest is history. Easy peasy.

As far as whether music predated speech, or vice versa, that’s more difficult. It can be argued music is more primal and has been around longer than speech, but good luck proving it!9 One can also argue about the number of angels dancing on the head of a pin, or exactly how high is high…but again, proof is the problem.

Ani Patel of The Neurosciences Institute in San Diego stated in a report by Hamilton3 that music taps into a pre-cognitive, archaic part of the brain. Patel said Charles Darwin “talked about our ancestors singing love songs to each other before we could speak articulate language…” Of note, Patel reports other species have musical ability. For example, certain monkeys recognize dissonant tones, and many birds use complicated patterns of rhythm and pitch. Some parrots move in time with the beat. Thus, it appears music may be more primal, and undoubtedly music and “musicality” exist in the absence of speech, which may indicate music appeared first…but proof?

Brandt and colleagues10 state music underlies the ability to acquire language. They contend language is a subset of music. Further, they write that “spoken language is a special type of music…” and, to be clear, that music came first and language arose from music. Part of this hypothesis centers on the concept that infants hear sounds and discriminate the sounds of language, such as its more musical aspects. Of note, Brandt and colleagues describe music in terms of “creative play with sound” and report “music” implies paying attention to the acoustic features of sound without a “referential function.” The authors report typically developing children start by perceiving speech as an intentional and generally repetitive vocal performance. They say infants listen for emotional content, as well as rhythmic and phonemic content, and the meaning of the words is applied later.

Prodigies and Music

Just for fun, consider that child prodigies most often express their special skills in music (or math and art). Beck11 notes prodigies almost always demonstrate extraordinary working memory (WM), not IQ. Boudreau and Costanza-Smith12 report WM controls attention and information processing. Indeed, WM might be thought of as “real-time cognitive juggling” or the mind’s ability to simultaneously manage and process hearing and listening, as well as retrieving and storing information. Of course, the information most often processed by the musical prodigy is auditory, which arguably suggests there’s something special about the way some humans handle music.

Conclusion

Speech and music share some perceptual and processing attributes, while other attributes are exclusive to one or the other. We cannot assume that, because we have defined (via an audiogram) a fraction of one’s ability to perceive sound, we understand their speech-in-noise ability or disability, and we certainly cannot make presumptions about their musical ability or perception without measuring it. Speech and music are complex and dynamic. Although speech and music acoustically interact and overlap in spectral content, they should be assessed, diagnosed, and managed as separate (perhaps complementary) acoustic phenomena as we work with patients, musicians, and colleagues.

References

1. Beck DL, Flexer C. Listening is where hearing meets brain…in children and adults. Hearing Review. 18(2):30-35.

2. Levitin D. This Is Your Brain on Music—The Science of a Human Obsession. New York City: Plume Publishing [Penguin Group]; 2006:197.

3. Hamilton J. Signing, singing, speaking: How language evolved. NPR Morning Edition. 2010. http://www.npr.org/templates/story/story.php?storyId=129155123

4. Metz M. Interview with Michael J. Metz, PhD, Author of “Textbook of Hearing Aid Amplification.” Available at: http://www.audiology.org/news/Pages/20140625.aspx#sthash.PmGcWEju.dpuf

5. Killion M, Mueller HG. Twenty years later, a new Count-The-Dots Method. Elk Grove Village, Ill: Etymotic Research; 2010. Available at: http://www.etymotic.com/publications/erl-0113-2010.pdf

6. Revit LJ. What’s so special about music. Hearing Review. 2009;16(2):12-19.

7. Beck DL, Nilsson M. Speech in noise testing—a pragmatic addendum to hearing aid fittings. Hearing Review. 2013;20(5):24-26.

8. Finkelstein D. Before we get into something we must know how to get out. Available at: http://www.theaustralian.com.au/news/world/before-we-get-into-something-we-must-know-how-to-get-out/story-fnb64oi6-1226710792081#

9. Gazzaniga MS. Human—The Science Behind What Makes us Unique. New York City: HarperCollins Publishers; 2008:235.

10. Brandt A, Gebrian M, Slevc LR. Music and early language acquisition. Front Psychol. 2012;3:327. DOI: 10.3389/fpsyg.2012.00327

11. Beck DL. On the importance of working memory with regard to hearing, listening, amplification, prodigies, and more. March 2014. Available at: http://www.audiology.org/news/Pages/20140420.aspx

12. Boudreau D, Costanza-Smith A. Assessment and treatment of working memory deficits in school-age children: the role of the speech-language pathologist. Lang Speech Hear Serv Sch. 2011;42:152–166.

Original citation for this article: Beck D. Issues and considerations regarding musicians, music, hearing, and listening. Hearing Review. 2014;21(8):14-16.