This article was submitted to HR by Francis Kuk, PhD, director of audiology; Denise Keenan, MA, Heidi Peeters, MA, and Chi Lau, PhD, research audiologists; and Bryan Crose, BS, research assistant, at the Widex Office of Research and Clinical Amplification (ORCA) in Lisle, Ill. Correspondence can be addressed to Francis Kuk, Widex ORCA, 2300 Cabot Dr, Suite 415, Lisle, IL 60532.
In previous papers we discussed the rationale for linear frequency transposition1 and highlighted the importance of choosing the right start frequency.2 In this paper, we will discuss another critical factor that affects the successful use of frequency transposition: that of a guided experience with the transposed sounds. We will discuss why some wearers may require a longer adjustment period, and what can be done to facilitate the initial adjustment to frequency transposition processing.
Some Hearing Aid Wearers React Negatively to Transposition. Why?
Despite the success of the frequency transposition algorithm (Audibility Extender, AE) used in the Inteo,3 some wearers may comment negatively on the sound quality of this algorithm when they listen to it for the first time. The more common descriptions of the transposed sounds include “raspy,” “harsh,” or “unnatural.” If left without intervention, some of these wearers would prefer not to have transposition (or to have no amplification at all). Understandably, some clinicians abide by their patients’ wishes and remove the AE program. Others may adjust the AE parameters drastically to effectively remove any impact from transposition.
The reason for these patients’ objections is not difficult to understand. Most people do not like change, especially big change. Imagine the annoyance we would feel if, with normal hearing, we wore a hearing aid that provides 20 dB of gain. Even though we could hear many sounds more easily with the hearing aid, most of us would rather not wear it.
From a neurophysiological standpoint, the consequences of a sensorineural hearing loss (SNHL) are seen not only at the peripheral level, but also at a cortical level. With SNHL, the auditory cortex that receives input from the peripheral level may reorganize its tonotopic representation.
Willott4 showed that cortical neurons in the high frequency regions become responsive to adjacent middle frequencies. Thai-Van et al5 showed that frequency discrimination at the slope of a high frequency hearing loss is heightened. With amplification (and transposition), the new information that becomes available will be “foreign” to the brain and could be perceived as “unnatural.” For the brain to recognize the new information as natural, new space will have to be allocated for the new cues, or a different neural representation that utilizes the existing neurons must be formed.
|FIGURE 1. Happy face analogy to show the effect of a hearing loss and the consequence of amplification/frequency transposition. Frequency transposition allows the clinician to move vital, previously unaidable information into the listener’s audible hearing range.|
A visual analogy may help in understanding this idea. Imagine that the full spectrum of sounds can be represented by the happy face shown in Figure 1a. The height of the face reflects the intensity range, whereas the width of the face shows the range of frequencies that can be heard. A common consequence of a hearing loss is that the height of the face is reduced (soft sounds are not heard) and the width of the face is narrowed because fewer frequencies are audible. The visual analogy is seen in Figure 1b, where the bottom of the face and the left ear are not shown.
If all the frequencies are aidable and all are aided adequately (and assuming that slow-acting compression is used), the result of conventional amplification is shown in Figure 1c. In this case, the high frequency information that was previously inaudible now becomes audible (the left ear is shown again). Although Figure 1c is identical to Figure 1a (normal hearing), the wearer will have to adjust to seeing the face with the left ear again, having grown accustomed to the partial face of the hearing loss condition (Figure 1b). Either cortical space needs to be re-allocated for the new information, or a different neural representation is necessary.
Unfortunately, not all the high frequencies can be amplified to become audible. Some may be “dead”6 from the extensive damage to the inner hair cells. This results in a narrower range of sounds that can be amplified. The effect of conventional amplification with a dead region is shown in Figure 1d. The left ear is missing to illustrate the effect of the dead high frequencies. In this case, the patient may need to develop strategies to “fill in” the missing left ear (or high frequency information), or suffer the consequences from the missing high frequency information. Nonetheless, with amplification, the patient has to learn to accept the happy face shown in Figure 1d from where he or she started (Figure 1b).
Figure 1e shows the case of amplification with frequency transposition. In theory, the rationale for transposition is to preserve the temporal and spectral integrity of the input below the start frequency, while adding the high frequency information onto the lower frequencies. The visual analogy of frequency transposition is that the “residual face” is unaltered (various facial features remain in the same place), and the missing left ear is moved back into the face so that it will have all the necessary features (eyes, ears, etc). However, the position of the ear is different from “normal.” The addition of the left ear makes the face look “unnatural” from Figure 1a (normal hearing) and Figure 1b (hearing loss). One needs time to acclimatize to the new face.
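For readers who want a concrete picture of what “moving the left ear back into the face” means acoustically, the basic operation can be sketched in a few lines. This is only an illustrative simplification under assumed parameters (FFT block processing, a 4000 Hz start frequency, a fixed downward shift); it is not the actual AE implementation.

```python
import numpy as np

def transpose_block(block, fs, start_hz, shift_hz):
    """Keep the spectrum below start_hz intact; shift the content above
    start_hz down by shift_hz and mix it into the audible range."""
    spectrum = np.fft.rfft(block)
    freqs = np.fft.rfftfreq(len(block), d=1.0 / fs)
    bin_hz = fs / len(block)
    shift_bins = int(round(shift_hz / bin_hz))
    out = spectrum.copy()
    out[freqs >= start_hz] = 0.0           # the unaidable (inaudible) region
    for i, f in enumerate(freqs):          # add that content back, lower down
        if f >= start_hz and i - shift_bins >= 0:
            out[i - shift_bins] += spectrum[i]
    return np.fft.irfft(out, n=len(block))

fs = 16000
t = np.arange(1024) / fs
block = np.sin(2 * np.pi * 5000 * t)   # a 5 kHz tone, above a 4 kHz start frequency
y = transpose_block(block, fs, start_hz=4000, shift_hz=2500)  # tone lands near 2500 Hz
```

Note that any input below the start frequency passes through unchanged, mirroring the stated rationale of preserving the temporal and spectral integrity of the low-frequency portion of the signal.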
In all these scenarios (Figures 1c, 1d, and 1e), the signal processing results in a happy face that is distinctly different from the one the patient has grown accustomed to (Figure 1b). It will take time for the auditory cortex to reorganize from the hearing loss condition so the new inputs can be appropriately represented. Gatehouse7 reported that it took 8 to 16 weeks post-fitting for wearers to realize the benefit of high frequency amplification (without transposition).
What can be done to facilitate initial adjustment? Because patients who can benefit from frequency transposition are not accustomed to the resulting sound that the new auditory information yields, the clinician needs to resolve their initial objections. A key decision is to determine the legitimacy of the objection. Is the objection a “natural” or expected response of the particular processing strategy, or is it a reaction to suboptimal settings of the hearing aid/algorithm? If the former is the reason for the objection, counseling may be all that is needed to resolve the issues. If the latter is the reason for the objection, then resetting the parameters may be necessary.
Distinguishing Between Expected Response and a Suboptimal Setting
To distinguish between an expected (normal) response and an inappropriate hearing aid setting, clinicians must be knowledgeable about the actions of the signal processing algorithm. They should also understand the perceptual consequences of the processing, as well as the hearing aid history of the wearer. In addition, knowledge of the efficacy of the algorithm can guide the clinician's decision.
For example, one common objection from new wearers of nonlinear hearing aids who previously wore linear hearing aids is that the nonlinear aids sound “too soft.” Most clinicians now recognize this as the result of compression, ie, less gain for louder sounds than the linear hearing aids to which the wearers were accustomed. But the same complaint could also indicate insufficient gain. However, if the clinician has measured the aided thresholds with the nonlinear hearing aids and found them to be around 20 dB HL across frequencies, one can reasonably conclude that the objection is a natural reaction to the use of compression.8 Counseling may then be the best means to address the patient’s objections. In contrast, if the aided thresholds with nonlinear amplification are even higher than those obtained with the previously worn linear hearing aids, insufficient gain may be the culprit, and a readjustment of the hearing aid settings may be in order.
For the same reason, one should verify the appropriateness of the AE settings before deciding whether the wearer’s initial objections stem from inexperience with the AE or from suboptimal AE settings. Using /s/ to determine the start frequency (the individual approach described in Part 1 of this paper2) not only identifies the appropriate start frequency, but also ensures optimal settings of the transposition parameters. Consequently, if the start frequencies were chosen with the individual approach and verified with the SoundTracker (described in a later section), one can be confident that the chosen parameters are optimal and that the wearer’s initial objections are natural responses to the initial use of frequency transposition.
Knowing the efficacy of the AE algorithm in the Inteo could also strengthen the clinicians’ confidence in their choice of the AE parameters. Although we are continuing to gather efficacy data on this algorithm, Korhonen9 has shown that normal-hearing people with a simulated hearing loss improved by an average of 15% in consonant identification with the AE. Kuk et al3 showed that hearing-impaired people wearing thin-tube, open-ear Inteo élans also improved by a similar magnitude on the Nonsense Syllable Test (NST). While these studies may not guarantee that every hearing-impaired patient would benefit from the AE (or that they would receive the same magnitude of benefit), they do demonstrate that the AE algorithm provides additional cues that may be usable if recommendations are followed.
Adjusting to the New Sounds
Assuming that the initial settings on the Inteo hearing aid are optimal, it is logical to counsel patients that the initial “unnaturalness” of the sound is normal. At a minimum, counseling and setting the right expectations will be necessary.
The challenge is knowing the most effective way to achieve this objective. At one end of the continuum is simple counseling; at the other is formal auditory training, and the question of whether such training will enhance acceptance of, or adjustment to, the AE. If training is necessary, the kind of training and how to ensure it is delivered and received are critical considerations. Currently, auditory training programs are not widely used, for reasons of cost, time, appropriateness, and lack of motivation. Many clinicians do not believe training is necessary because of improvements in technology; many consumers do not expect it to be necessary because they expect hearing aids, like eyeglasses, to correct their hearing difficulties automatically. These factors must be considered if one wishes to integrate auditory training into the hearing aid fitting/rehabilitation protocol.
As hearing care professionals, we must always remind ourselves that the audibility of a specific sound does not guarantee immediate wearer recognition or identification of the sound, or that the sound can be processed to form meaningful concepts. We must ensure that we achieve audibility through the use of appropriate means of amplification, and ensure that the improved audibility can be successfully used by the patient. Although hearing aid wearers may use the new acoustic information and form associations between acoustic percepts and meanings on their own, such associations may take longer to realize if no guidance is provided. Counseling and simple directed activities may facilitate the rate at which patients adjust to the settings on the hearing aids, and thus realize improved performance sooner.
One approach to facilitate the patient’s acceptance of the transposition program is to heighten awareness of the differences between the master program (no transposition) and the AE program (with transposition). This may be reinforced by training patients on the specific sounds that are most affected by frequency transposition. The goal is for wearers to focus their attention on the differences between the two types of processing, understand the normalcy (and benefits) of the perceived differences, and learn to associate the new percepts with meaning for improved recognition of sounds, including speech. The two components of our approach are:
1. Increase awareness of the transposition processing. The patient should be instructed on the use of the [master] and the [AE] programs after the appropriateness of the settings has been verified.2 The difference in processing between the [master] program and the [AE] program may be explained using the real-time SoundTracker simulated real-ear display10 on Compass. Bird songs and conversational speech may be used as stimuli to illustrate the difference in processing between the [master] and [AE] programs. In describing the differences, one may consider using the following text:
|FIGURE 2. Real-time SoundTracker view of the master program (2a) and the AE program (2b). The colored bars represent the 15 channels on the Inteo. Bars above the solid blue line represent audibility in that frequency channel. One sees that sounds in the master program (2a) at and above 2500 Hz (green bar) are not audible. In the AE program (2b), sounds above 3200 Hz are now moved down and become audible. Note that the 4000 Hz signal (blue bar), which was not audible before, is now audible at 2000 Hz (blue bar at 2000 Hz, above the sensogram).|
Mr. Jones, this graph shows me what you hear in each of the 15 channels of the hearing aid [see Figure 2a with master program]. When the bars exceed this line, which represents your hearing acuity, you hear that sound. Otherwise, that sound—even though it may be amplified—remains inaudible to you (you may explain why that’s not audible if asked by the patient). As you can see, as I am speaking to you, the part of my voice in the higher pitches remains below your hearing acuity line.
The additional listening program I gave you takes a different approach to making you hear those high-pitch sounds [switch to AE program, Figure 2b]. Rather than just amplifying them, this program takes the high-pitch sounds in the yellow shaded region and moves them down to a lower pitch. You can see the darker colored bars here. And you can see that they are now above the hearing acuity line. That means you are now hearing the high-pitch sounds, but you’re hearing them as a lower pitch substitute.
You can see that the patterns of sounds between the [master] and the [AE] programs are quite different. You are hearing more sounds in the [AE] program—but it may sound “raspy” because you now hear the high-pitch sounds you missed before. It may sound unnatural because it is not the same as what you typically hear. We know from other people that, in time, the perception of these sounds will become more natural as you begin to integrate the new information into your repertoire of sounds.
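The audibility logic behind this explanation, which the SoundTracker display makes visible, reduces to a per-channel comparison of aided output level against the wearer’s threshold. The sketch below uses invented channel frequencies, levels, and thresholds purely for illustration; none of the numbers are taken from the Inteo.

```python
# Toy per-channel audibility check; all values are hypothetical.
channel_hz   = [500, 1000, 2000, 2500, 3200, 4000]
output_db    = [65,  60,   55,   50,   45,   40]   # aided output level per channel
threshold_db = [40,  45,   50,   55,   60,   65]   # wearer's threshold per channel

# A channel's bar rises above the "hearing acuity line" when output > threshold.
audible = {f: out > thr
           for f, out, thr in zip(channel_hz, output_db, threshold_db)}
# Here the 2500-4000 Hz channels fail the comparison; transposition moves
# their energy down into channels where the comparison succeeds.
```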
The clinician may instruct patients to listen to a list of everyday sounds with each program (master and AE) in order to heighten their awareness of sounds. By directing patients’ attention to specific sounds, we have seen that they become acquainted with the AE sooner, better appreciate its advantages (ie, hearing more sounds with the AE than with the master program), and form the necessary associations between sounds and meanings more quickly. Patients should be instructed to indicate their preference between the master and AE programs at the end of a 2-week follow-up evaluation. To ensure that the AE program is used in patients’ daily lives, one may also activate the SoundDiary (data logging) so the total time each program is used can be monitored. If the AE program is used infrequently, one may conclude that it is not yet acceptable and that fine-tuning may be necessary. The instructions to patients can be something like:
In order to facilitate your experience with the new processing, we have collected a list of everyday sounds that many people with a hearing impairment told us they were surprised to hear when they first wore these hearing aids. I would like you to pay attention to each of the listed sounds over the next 2 weeks. Please try both programs in each sound situation, and see if you can tell any differences between them. You are welcome to use the program you prefer in your daily activities, but you must try both programs before settling on one.
2. Directed training on sounds most affected by the AE program. A 10-day, PC-based exercise program was developed for research purposes in order to provide patients with directed training on voiceless consonant speech sounds. These are the sounds most amenable to the action of frequency transposition. The targeted speech sounds include /p, t, k, s, f, ʃ, tʃ/. This is “bottom-up” training in which the patients’ attention is directed to different sounds each day. As training progresses, new sounds are introduced. Each sound is trained at the syllable level (paired with the vowel /i/), the word level, and the sentence level. The materials were judged to be at a 6th grade reading level.
In order to maintain patients’ interest and attention, the exercises are divided into different interactive “game” activities. In a “discrimination” task, patients identify which word of a pair (cat, cats) is spoken in a sentence. In an “attention” exercise, patients count the number of times a target phoneme is said in a sentence. An “identification” exercise uses a crossword puzzle in which patients listen to a sentence for clues in order to complete the puzzle. A “memory” exercise requires patients to hear the target sound and select it from a list of options. Consequently, even though the goal of the exercises is to increase patients’ awareness of the target voiceless consonant sounds, the format encourages the use of various auditory processing skills.
Patients are encouraged to listen to each target sound as many times as needed in order to develop familiarity with the sounds. To increase the generalizability of the training, three different speakers (two female and one male) were used to record the training materials. It takes about 20 to 30 minutes for patients to complete the daily exercises. They perform the exercises for 5 days, rest 2 days, and then complete the remaining 5 days of exercises. To ensure the use of the AE program, patients should wear the Inteo hearing aids with the AE program only.
It should be stressed that the exercise described here was developed only for the purpose of evaluating the [AE] program for high frequency hearing losses. Other auditory training materials may also be appropriate.
Case Study: Subject J
|FIGURE 3. Audiogram of Subject J.|
“Subject J” was in his late 60s, and had been wearing binaural digital CIC hearing aids for his precipitously sloping high frequency hearing loss (Figure 3). He was dissatisfied with his current aids, stating that they were no different than not wearing any hearing aids. He was skeptical about the benefits of the study hearing aids, but was open to trying new technology. Because of the configuration of his hearing loss, thin-tube, open-fit Inteo élans (IN9-e) were used.
Initially, he was reluctant to wear behind-the-ear (BTE) hearing aids. Once we placed the BTE over his ears and started discussing the details of the study, he immediately noticed an improvement over his previous hearing aids and agreed to try the Inteo élan hearing aids.
Fitting the hearing aids proceeded with the recommended sensogram and feedback test. Only two listening programs (a master and an AE program) were assigned. The start frequency of the AE program was measured at 4000 Hz in both ears, using the approach described in Kuk et al.2 Subject J was very pleased with the performance of the master program, describing the sound quality as the best he could remember. On the other hand, his initial reaction to the AE program was marginal: he described an “unnatural,” “raspy” sound quality. He would have preferred the master program alone, but reluctantly accepted a trial of the AE program. His NST consonant score at 50 dB HL was 52% for the master program and 58% for the AE program; vowel scores were 72% for both programs. He was counseled on expectations for the AE program.
At the end of the initial session, the positions of the master and AE programs were randomly assigned by another audiologist, so both the study audiologist and the subject were blind to the settings on the hearing aids (ie, double-blind study). The data-logging feature on the Inteo was also activated to measure the frequency of use for each program.
Subject J returned in 2 weeks for a second visit. He reported that the “raspy” and “harsh” sound quality that he heard on the first visit was no longer an issue. From the SoundDiary, we noted that he used the AE program almost 80% of the time. When asked if he would find two programs beneficial, he replied that he could not see the benefit of the master program because he was hearing so much more with the AE program. He also reported that he could hear his wife and the flight attendant better while flying. With the AE program, he was more relaxed while traveling. He also enjoyed hearing other sounds that he had not heard for a long time, such as bird songs and the beep of a golf cart when it was in reverse.
During the second visit, we repeated the NST measurement at 50 dB HL. His consonant score was 47% for the master program and 58% for the AE program. His vowel scores were 75% and 80% for the master and AE programs, respectively. As a baseline, we also measured his preference for the AE program using bird songs, music, and discourse passages. For bird songs, the AE program was rated the same as or better than the master program 70% of the time. The preference for the AE program was 60% for music, but 0% for discourse. Subject J still preferred the master program to the AE program for listening to speech, even though the “raspy” sound quality was no longer an issue.
|FIGURE 4. Subject J’s averaged NST consonant (left) and vowel (right) scores for the master and AE programs at the initial, second, and third office visits.|
At the end of the second session, we deactivated the program control button on the Inteo hearing aid and made the AE program the first and only default program. We also introduced the patient to the 2-week training CD and instructed him on its use.
When he returned after 2 weeks, his consonant scores on the NST were 55% for the master program and 72% for the AE program. His vowel scores were 70% and 84% for the master and AE programs, respectively. Subjectively, he found the AE program to be the same as or better than the master program 100% of the time when bird songs and music were used as stimuli, and 80% of the time when speech was used. He was very pleased with the AE program and commented that, had such an option been available sooner, he would have been satisfied with amplification earlier. Subject J’s performance over time on the NST and the subjective preference measures is summarized in Figures 4 and 5.
|FIGURE 5. Preference for the AE program over the [master] program for bird songs, music and speech over time.|
In this case study, Subject J benefited greatly from the use of the AE program, but that benefit was not immediately apparent. Indeed, if one were to judge the success (or failure) of the fitting simply on his initial comments, one would have removed the AE program during the first session. In that case, the improvement in speech recognition in quiet and the growing preference for the AE program across bird songs, music, and speech observed over time would never have been realized. Thus, it is important for clinicians to realize that changes in perception take time.
Clinicians must understand the reason for a patient’s initial objection (if it occurs), whether it is a natural part of initial acclimatization or the result of a poor fitting, so they may act accordingly. From that standpoint, understanding the efficacy of the AE and gaining experience with it are important. In addition, providing patients with activities that facilitate acceptance of the device/algorithm will enhance the rate of adjustment to frequency transposition.
References

1. Kuk F, Korhonen P, Peeters H, Keenan D, Jessen A, Andersen H. Linear frequency transposition: extending the audibility of high frequency information. Hearing Review. 2006;13(10):42-48.
2. Kuk F, Keenan D, Peeters H, Korhonen P, Hau O, Andersen H. Critical factors in ensuring efficacy of frequency transposition I: individualizing the start frequency. Hearing Review. 2007;14(3):60-67.
3. Kuk F, Peeters H, Keenan D, Lau C. Use of frequency transposition in thin-tube, open-ear fittings. Hear Jour. 2007;60(4). In press.
4. Willott J. Physiological plasticity in the auditory system and its possible relevance to hearing aid use, deprivation effects, and acclimatization. Ear Hear. 1996;17(3):66S-77S.
5. Thai-Van H, Micheyl C, Moore B, Collet L. Enhanced frequency discrimination near the hearing loss cut-off: a consequence of central auditory plasticity induced by cochlear damage? Brain. 2003;126(10):2235-2245.
6. Moore B. Dead regions in the cochlea: conceptual foundations, diagnosis, and clinical applications. Ear Hear. 2004;25(2):98-116.
7. Gatehouse S. Role of perceptual acclimatization in the selection of frequency responses for hearing aids. J Am Acad Audiol. 1993;4(5):296-306.
8. Kuk F, Ludvigsen C. Reconsidering the concept of the aided threshold for nonlinear hearing aids. Trends Amplif. 2003;7(3):77-97.
9. Korhonen P. The effect of training on frequency transposition. Poster presented at: American Auditory Society 2007 Scientific Meeting; March 2007; Phoenix.
10. Kuk F, Damsgaard A, Bulow M, Ludvigsen C. Using digital hearing aids to visualize real-life effects of signal processing. Hear Jour. 2004;57(4):40-49.