Expert Roundtable | September 2015 Hearing Review

Note: This is the sixth article in a seven-part special Expert Roundtable series published in the September 2015 edition of The Hearing Review, guest-edited by Douglas Beck, AuD.

Chapter 6: To learn new things, you need a clear message

The first and best thing we can do for people with hearing loss is to provide them with a well-amplified speech signal. Even subtle improvements like widening the bandwidth can make a significant difference and provide patients with more information about the words they hear.

Words are one of the fundamental building blocks of knowledge and communication.

Most of the words we know we learned in childhood1 through a series of steps. Those steps allowed us to incorporate new words into our vocabularies and to strengthen our understanding of other words through experience.2-4

The first step in the process is the detection of unknown words, which, interestingly, occurs most often through direct and indirect communication with others.5,6 Detection triggers a configuration process in which the acoustics and the semantics (meaning) of new words are bound together. Configuration may not be perfect at first, but through multiple exposures and through interaction with other words in our vocabulary (ie, engagement), we eventually become comfortable with new words and incorporate them into conversation. This process happens dozens, perhaps hundreds, of times a day such that the average high-school graduate knows upwards of 20,000 words.7

Andrea Pittman, PhD

But what if a child can’t hear well? A number of studies have examined the vocabularies of children with different degrees of hearing loss and compared them to children with normal hearing. Most studies use the Peabody Picture Vocabulary Test (PPVT)8,9 to quantify receptive vocabulary in terms of vocabulary age and standard score. PPVT data from our laboratory indicate that children with mild-to-moderate hearing losses tend to have vocabularies 2 years behind their normal-hearing peers.10 That’s equivalent to a child entering 3rd grade with a 1st grade vocabulary (not a great situation for the kid).

Research from Australia reported that the vocabularies of children with moderate-to-profound hearing loss were as much as 4 years behind those of their peers, putting them into a whole different category academically.11 A recent study in the UK showed that these vocabulary deficits persist through the college years.12 One interesting finding from the UK study is that the college students thought they knew the meaning of many more words than they actually did. So, if a child can’t hear well, the fundamental building blocks of knowledge and communication are unstable.

To address this problem, we need to understand what it is about hearing loss that interrupts the steps to learning new words. We recently developed a series of experimental paradigms in our lab to examine each step closely. These steps include: 1) the recognition of familiar words; 2) the ability to categorize words as either familiar or new; 3) the detection of new words within sentences; and 4) the rapid learning of new words. For these tasks, nonsense words serve as proxies for “new” words so we don’t have to worry about which words listeners do and do not already know.

To date, we’ve used these tasks with children and adults with mild-to-moderate hearing losses and found similar results (we include adults in our studies because, like children, they regularly learn new information too). Our research shows that the effects of hearing loss are pervasive and reduce performance on every task.

The results of the second task (the ability to categorize words as either familiar or new) are a good example of the problem. In this task, we asked listeners to repeat real and nonsense words. The test is administered just like a clinical word recognition test in which perception is judged by the accuracy of the words the listener produces. However, traditional word recognition tests are not without problems. First and foremost is the fact that two people (patient and clinician) are doing the perceiving, and both of them can make errors. Second, when nonsense words are included as stimuli, scoring accuracy can get out of hand quickly if it’s not done with care. We therefore audio-record each listener’s responses and have an independent examiner score them after the fact.

In addition to repeating each word, the listeners indicate if they heard a real or a nonsense word. This extra piece of information complicates the analyses compared to a word recognition test, but the results are worth it. It turns out that, for each type of word (real or nonsense), there are 5 different ways that a listener can get it wrong and only 1 way to get it right.
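To make the arithmetic behind “5 ways to get it wrong and only 1 way to get it right” concrete, here is a minimal sketch in Python of one way the response space for a single trial could be enumerated. The two scoring dimensions and their category names are illustrative assumptions drawn from the description above and in the paragraphs that follow, not the laboratory’s actual scoring rubric.

```python
from itertools import product

# Illustrative (assumed) scoring dimensions for one trial:
#   repetition - the word was repeated exactly, repeated as a different real
#                word, or repeated as a different nonsense word
#   judgment   - the listener labeled what they heard as "real" or "nonsense"
REPETITIONS = ("exact", "different_real_word", "different_nonsense_word")
JUDGMENTS = ("real", "nonsense")


def is_fully_correct(stimulus_type, repetition, judgment):
    """A trial counts as correct only when the word is repeated exactly AND
    the real/nonsense judgment matches the stimulus that was presented."""
    return repetition == "exact" and judgment == stimulus_type


for stimulus_type in ("real", "nonsense"):
    outcomes = list(product(REPETITIONS, JUDGMENTS))
    n_correct = sum(is_fully_correct(stimulus_type, r, j) for r, j in outcomes)
    print(f"{stimulus_type} stimulus: {n_correct} correct outcome, "
          f"{len(outcomes) - n_correct} error outcomes")
```

Under these assumptions, each stimulus type yields six possible outcomes (three repetition categories times two judgments), only one of which is fully correct, leaving the five ways to get it wrong.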

Our results show that listeners with normal hearing rarely make errors, and the errors they do make appear to be random. Listeners with hearing loss, on the other hand, make many errors, and those errors tend to fall into two categories, both involving nonsense words. First, they recognize that the word they heard was nonsense and respond with a nonsense word, but they don’t say it correctly. That kind of error appears to be a simple misperception of the nonsense word. When this happens, the listener may just need to hear the word again (“What?”).

The second kind of error is more troubling. For many of the nonsense words, listeners indicated they heard a real word and then they said a real word. This kind of error suggests that the listeners were automatically (unknowingly) repairing the nonsense words to be real.

The results make sense when you think about hearing loss and how listeners have to fill in the information they can’t hear. This kind of listening strategy probably keeps them in a conversation longer, but the same strategy may undermine their ability to identify new words that they could be learning. Their listening and learning strategies are, in effect, competing against one another. For children, this could be especially detrimental in academic environments.

But here’s the interesting thing: providing the right kind of amplification reduces these errors. Specifically, listeners with hearing loss made the most errors when they weren’t using hearing aids, and those errors were fairly evenly distributed between the two types described above. When the listeners used hearing aids, however, their misperception errors went up and their repair errors went down.

That doesn’t sound like a good thing, but it is. Amplifying the speech signal allowed the listeners to recognize nonsense words for what they were, even though they couldn’t repeat them exactly right. Without hearing aids, they often didn’t know they were hearing nonsense words. When we improved their access to the speech signal further (by widening the amplification bandwidth to at least 8 kHz), their errors in both categories fell to equally low levels, the lowest of all listening conditions.

That’s really good news because it means the first and best thing we can do for people with hearing loss is to provide them with a well-amplified speech signal. Even the subtle improvements from widening the bandwidth made a significant difference and provided them with more information about the words they heard. The alternative is less attractive because it means that, when individuals with hearing loss aren’t receiving optimal amplification, they may be missing opportunities to learn new words. This could be responsible, in part, for the poorer vocabularies we see in children with hearing loss compared to their normal-hearing peers.

Although these results represent a small part of the word-learning process, the take-home message is applicable to nearly every aspect of learning. That is, a clear message helps individuals make the most out of every opportunity to learn new information.

References

  1. Bloom P. How Children Learn the Meanings of Words. Cambridge, MA: MIT Press; 2000:1-23.

  2. Storkel HL, Lee SY. The independent effects of phonotactic probability and neighborhood density on lexical acquisition by preschool children. Lang Cogn Process. 2011;26:191-211.

  3. Leach L, Samuel AG. Lexical configuration and lexical engagement: when adults learn new words. Cogn Psychol. 2007;55:306-353.

  4. Gray S, Pittman A, Weinhold J. Effect of phonotactic probability and neighborhood density on word learning configuration by preschoolers with typical development and specific language impairment. J Speech Lang Hear Res. 2014;57(3):1011-1025.

  5. Akhtar N, Jipson J, Callanan MA. Learning words through overhearing. Child Dev. 2001;72:416-430.

  6. Akhtar N. The robustness of learning through overhearing. Dev Sci. 2005;8:199-209.

  7. Bloom P. Fast Mapping and the Course of Word Learning. 2001:25-53.

  8. Dunn LM, Dunn LM. Peabody Picture Vocabulary Test III. 3rd ed. 1997.

  9. Dunn LM, Dunn DM. Peabody Picture Vocabulary Test IV. 4th ed. 2007.

  10. Pittman AL. Short-term word-learning rate in children with normal hearing and children with hearing loss in limited and extended high-frequency bandwidths. J Speech Lang Hear Res. 2008;51:785-797.

  11. Blamey PJ, Sarant JZ, Paatsch LE, Barry JG, Bow CP, Wales RJ, Wright M, Psarros C, Rattigan K, Tooher R. Relationships among speech perception, production, language, hearing loss, and age in children with impaired hearing. J Speech Lang Hear Res. 2001;44:264-285.

  12. Sarchet T, Marschark M, Borgna G, Convertino C, Sapere P, Dirmyer R. Vocabulary knowledge of deaf and hearing postsecondary students. J Postsecond Educ Disabil. 2014;27:161-178.

Andrea Pittman, PhD, is an associate professor and director of the Pediatric Amplification Lab in the Department of Speech and Hearing Science at Arizona State University, Tempe, Ariz.

Correspondence can be addressed to HR or Dr Pittman at [email protected]

Original citation for this article: Pittman A. The amplification of new information. Hearing Review. 2015;22(9):24.

This article is one of seven chapters in a series of articles that review the key points addressed during the 2015 AudiologyNOW! session titled “Issues, Advances, and Considerations in Cognition and Amplification.” Follow the links to related chapters by Douglas L. Beck, PhD, Brent Edwards, PhD, Christian Füllgrabe, PhD, Gabrielle Saunders, PhD, Jason Galster, PhD, and Gurjit Singh, PhD.