By Douglas L. Beck, AuD, and Jennifer Duffey, MS

Hearing aid fittings are dynamic, patient-based processes. There are many reasons for dispensing professionals to use real-ear measures (REM) for each hearing aid fitting. Nonetheless, traditional REMs have not gained widespread clinical acceptance.

Although the exact reasons are not clear, perhaps dispensing professionals’ resistance to conducting real-ear measurements is partially based on the lack of a clear and consistent relationship between REMs and successful hearing aid fitting outcomes. Perhaps another reason is that traditional REM produces little appreciable increase in the patient’s understanding or appreciation of aided benefit. Perhaps a third reason is the sometimes confusing multiplicity of acronyms related to “REM-speak,” such as REIG, REAR, REIR, and RESR.

Visible Speech (VS) is built on a logical, scientific foundation, and offers many of the benefits of traditional REM. However, information obtained and displayed based on visible speech stimuli is more intuitive and pragmatic because:

1) VS is based on human speech. It is more engaging and meaningful to the patient than pure-tones, warble-tones, or other artificial sound stimuli.

2) VS represents a tool for explaining hearing loss to patients. It provides the dispensing professional, patient, and significant other(s) an excellent counseling and aural rehabilitation tool which simultaneously measures, verifies, and demonstrates aided benefit, based on human speech.

3) VS is intuitive for patients. It presents information in an easy-to-understand model based on the speech intelligibility index, allowing greater understanding and retention of the information via a multimedia presentation.

Minimally used REMs. Real-ear probe-microphone measures are important because they represent the only objective analysis of sound between the hearing aid and the tympanic membrane. However, Mueller1 reported that only about 1 in 3 hearing care professionals regularly obtained real-ear measures. Additionally, Strom2 reported that, although 57% of hearing care offices have real-ear equipment, REM tests on adults are routinely performed only 23% of the time. Therefore, it can be argued that traditional REMs are used only in a minority of hearing aid fittings.

But what about REM simulations? Many professionals use, and depend on, simulated real-ear screens for hearing aid fittings. Simulated real-ear screens are essentially proprietary tools, supplied by hearing aid manufacturers via their hearing aid fitting software, to represent averaged anticipated results. Importantly, simulations are not meant to replace patient-specific real-ear recordings. Unfortunately, accuracy in the absence of actual real-ear measures is highly unlikely due to multiple uncontrolled variables.3

As Audioscan’s Bill Cole cautions in Mueller’s article:

“Dispensers need to know that it is not only possible to use modern probe-mic systems to fit digital hearing instruments, it is almost mandatory…they cannot rely on manufacturer’s ‘first fit’ algorithms to deliver the potential of the instruments they fit, nor can they rely on manufacturer’s software simulations to show them what is going on in their client’s ear.”1

Aarts and Caffee4 reported on 41 subjects whose measured real-ear aided responses (REARs) were compared with predicted REARs. Less than 12% of the predicted REARs were comparable to the actual REARs. The authors stated that, based on their findings, using predicted REARs is clinically inappropriate.

Mueller1 reported two trends related to real-ear measurement. First, there is an apparent movement towards REAR rather than real-ear insertion gain (REIG) for children and adults. Second, there is a movement towards speech and speech-like stimuli as the input signal to assess the hearing aid’s response via real-ear recordings—referred to as “speech mapping” or “visible speech.”

Van Vliet5 stated that using a manufacturer’s representation of the real-ear or 2cc coupler output is not much better than guesswork, and that to neglect using probe microphone measures to verify the true hearing aid fitting is irresponsible. He noted differences between measured results and representative results (ie, simulations) could be due to anatomic differences, equipment calibration differences, different assumptions, and other factors.

In summary, REMs are not typically used when fitting hearing aids, and real-ear simulations are just that: simulations indicative of general trends and interactions, and they should not be used as the basis of individual hearing aid fittings.

In short, the only way to know what’s really going on in the ear canal is to measure it!

Multimedia: Increasing Information Transfer

Margolis6 reported that patients retain as little as one-half of the information transmitted by professionals, and that perhaps as much as two-thirds is forgotten almost immediately. He suggested that, to increase the patient’s retention and recall, visual materials (photographs, charts, illustrations) should be offered as demonstration tools and given to the patient as take-home items.

Beck and McGuire7 noted that the spoken message from professional to patient is often not perceived correctly (ie, as intended). They reported that using high-quality, easy-to-use, and easy-to-understand multimedia tools increases the probability of correct information transfer. Visual images are powerful and emotional, and they initiate additional cognitive processes. Beck and McGuire suggested that combined auditory and visual presentations are synergistic and provide the most powerful transmission and retention of information. It seems reasonable, therefore, that a patient-based multimedia presentation built on live, recorded, or familiar speech, used to educate and counsel patients regarding their hearing, hearing loss, and hearing aids, would facilitate greater information transfer and retention than traditional clinical tools.

Visible Speech: Essential Concepts

The VS approach is simple and is based on three straightforward concepts:

1) Speech is the single most important sound we listen to. For hearing aid fittings to be successful, speech must be appropriately amplified with respect to loudness, clarity, and comfort. Soft speech should be perceived as soft, medium speech should be perceived as medium, and loud speech should not exceed the patient’s loudness discomfort levels, despite the reduced dynamic ranges typical of sensorineural hearing loss.8,9 (A minimal compression sketch illustrating this idea appears after this list.)

FIGURE 1. The Affinity system is a PC-based hardware platform that can be used when conducting clinical audiometry, hearing instrument testing, REMs, and visible speech.

2) It is difficult for patients to relate their hearing loss to their audiogram. Pure-tones, decibels, and hertz are difficult concepts for the average person. Although the audiogram is the common currency of hearing loss among dispensing professionals, it is considerably less meaningful and more difficult to understand for the patient.

3) Visual images facilitate counseling. Representing the patient’s ability to perceive human speech in aided and unaided conditions provides a powerful message which is easily recalled by the patient.
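
To make the loudness goal in concept 1 concrete, here is a minimal single-channel wide-dynamic-range compression sketch in Python: linear gain for soft inputs, compression above a kneepoint, and a ceiling at the discomfort level. All parameter values are illustrative assumptions, not settings from any particular hearing aid or from the Affinity software.

```python
# Minimal sketch of mapping speech levels into a reduced dynamic range with
# single-channel wide-dynamic-range compression. All parameters are
# illustrative assumptions, not settings from any particular device.

def compressed_output(input_db_spl, gain_db=25.0, kneepoint_db=45.0,
                      ratio=2.0, ucl_db=100.0):
    """Linear gain below the kneepoint, compression above it, capped at UCL."""
    if input_db_spl <= kneepoint_db:
        output = input_db_spl + gain_db
    else:
        output = kneepoint_db + gain_db + (input_db_spl - kneepoint_db) / ratio
    return min(output, ucl_db)  # never exceed the loudness discomfort level

for label, level in [("soft", 50), ("medium", 65), ("loud", 80)]:
    print(f"{label:>6} speech: {level} dB SPL in -> {compressed_output(level):.1f} dB SPL out")
```

The point of the sketch is simply that soft, medium, and loud inputs remain ordered (soft stays softest, loud stays loudest) while all outputs stay within the listener’s residual dynamic range.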

Within the Interacoustics product line, the Affinity (Figure 1) is a PC-based hardware platform with multiple software modules (ie, clinical audiometry, hearing instrument testing, REMs, and visible speech).

Visible Speech Stimuli

FIGURE 2. The in-situ headset has two probe tubes for left, right, or combined ear testing.

The acoustic stimulus for VS recordings is human speech. Speech can be presented live via a microphone by the patient, a significant other, or the audiologist, or digitized speech samples can be imported into the Affinity VS software from other sources. Digitized speech samples, including adult-male and adult-female samples, are also available within the pre-packaged software.

To acquire VS measures and recordings, the in-situ headset has two probe microphone assemblies which allow separate (or, if preferred, combined) left- and right-ear VS recordings (Figure 2). The dispensing professional can elect to record and display monaural or simultaneously acquired binaural images. The in-situ headset is lightweight, comfortable to wear, easy to position on the patient, and available in pediatric and adult sizes.

After the patient has been instructed with regard to VS goals, positioned to observe the viewing monitor, and the headset has been carefully placed, the measurement, display, and verification of speech sounds can begin.

Speech spectrum analysis and the Speech Intelligibility Index (SII). The concept of presenting a “picture of spoken words” is not new to audiology, nor is the idea of “picturing” aided versus unaided speech. There have been many useful and creative speech intelligibility, audibility, and articulation index formulae.

Mueller and Killion10 noted that calculations of the articulation index (AI) have been in use for more than 55 years. The AI is basically the percentage of speech that is audible to a given patient based on his or her hearing thresholds. The AI is a simple, easy-to-understand representation of “heard” compared to “not heard” speech sounds. Of course, the fact that a sound is heard does not mean the sound was perceived or is useful to the patient. Nonetheless, the AI is an excellent teaching and counseling tool, and its general concept is carried forward in the Visible Speech module of the Affinity via the Speech Intelligibility Index.
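
For illustration only, the following Python sketch captures the count-the-dots logic in its simplest form: count how many points of an assumed speech spectrum fall at or above the patient’s thresholds. The speech “dot” levels and their distribution across frequencies are placeholders chosen for readability, not the published Mueller-Killion worksheet values.

```python
# Illustrative sketch of a count-the-dots style audibility estimate.
# The speech "dot" levels below are placeholders, not the published
# Mueller-Killion (1990) worksheet values.

# Assumed average speech levels (dB HL) at audiometric frequencies,
# with more "dots" at frequencies that carry more speech information.
SPEECH_DOTS_DB_HL = {
    250:  [35, 45],
    500:  [30, 40, 50],
    1000: [25, 35, 45, 55],
    2000: [20, 30, 40, 50],
    4000: [15, 25, 35],
}

def articulation_index(thresholds_db_hl):
    """Return the fraction of speech 'dots' at or above threshold."""
    total = audible = 0
    for freq, dots in SPEECH_DOTS_DB_HL.items():
        threshold = thresholds_db_hl.get(freq, 120)  # treat missing data as inaudible
        for level in dots:
            total += 1
            if level >= threshold:                   # dot is at or above threshold: audible
                audible += 1
    return audible / total

# Example: a sloping mild-to-moderate loss
ai = articulation_index({250: 25, 500: 30, 1000: 40, 2000: 50, 4000: 60})
print(f"AI (approx.) = {ai:.0%}")
```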

The SII within the module has been calculated in accordance with the ANSI S3.5-1997 standard,11 presuming: 1) there is no external noise signal; 2) there is no self-speech masking; and 3) the speech signal corresponds to the standardized speech spectrum level for normal vocal effort. Within the Visible Speech module, unaided thresholds and uncomfortable loudness level (UCL) measures are shown on the VS-audiogram in dB SPL as measured at the eardrum. Data are presented at one-third octave center frequencies from 125 Hz to 8,000 Hz with consideration for the speech signal, a masking noise, and the hearing loss parameters.
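
Conceptually, the SII is a sum of per-band audibilities weighted by band-importance values. The Python sketch below shows only that structure; the band weights, speech levels, and example thresholds are illustrative placeholders rather than the one-third octave tables specified in ANSI S3.5-1997.

```python
# Conceptual sketch of an SII-style calculation: audibility per band,
# weighted by band importance, summed across bands. Values are
# illustrative placeholders, not the ANSI S3.5-1997 tables.

BANDS = [
    # (center frequency Hz, importance weight, speech peak dB SPL, speech floor dB SPL)
    (250,  0.10, 60, 30),
    (500,  0.15, 58, 28),
    (1000, 0.25, 55, 25),
    (2000, 0.25, 50, 20),
    (4000, 0.15, 45, 15),
    (8000, 0.10, 40, 10),
]

def sii(eardrum_thresholds_db_spl):
    """Sum of importance-weighted band audibilities (0.0 to 1.0)."""
    total = 0.0
    for freq, weight, peak, floor in BANDS:
        threshold = eardrum_thresholds_db_spl.get(freq, 120.0)
        # Fraction of the speech range in this band that lies above threshold.
        audible_fraction = (peak - max(threshold, floor)) / (peak - floor)
        audible_fraction = min(1.0, max(0.0, audible_fraction))
        total += weight * audible_fraction
    return total

unaided = sii({250: 45, 500: 50, 1000: 60, 2000: 70, 4000: 75, 8000: 80})
print(f"SII (approx.) = {unaided:.0%}")
```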

FIGURE 3. AQ’s audiometric profile. Shown above are pure-tone thresholds and UCL values for the right ear.

Target Speech Spectrum

The target speech spectrum for non-linear hearing aids is difficult to estimate and depends on many factors (eg, number of compression channels, bandwidth of compression channels, non-linear gain characteristics, time constants, fitting rationale, etc). The green area of the VS-audiogram is therefore a “target” speech spectrum, which is calculated as the standard speech spectrum amplified in accordance with a well-known linear fitting rationale (NAL).12 Theoretically, if the patient were to perceive all the sounds within the target range, they would obtain the best possible SII score, presumably similar to their word recognition score at the most comfortable listening level (MCL).
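
For readers who want to see how a linear NAL target can be derived in principle, the Python sketch below applies the commonly published NAL-R insertion-gain formula (gain = X + 0.31·H(f) + k(f), where X is based on the 500, 1,000, and 2,000 Hz thresholds). The k(f) constants shown should be verified against Byrne and Dillon12 before any clinical use, and the sketch deliberately ignores the non-linear factors listed above.

```python
# Sketch of an NAL-R style linear insertion-gain calculation (after Byrne & Dillon, 1986).
# Formula: REIG(f) = X + 0.31 * H(f) + k(f), with X = 0.05 * (H500 + H1000 + H2000).
# The k(f) constants below are as commonly published for NAL-R; verify against
# the original reference before any clinical use.

K_DB = {250: -17, 500: -8, 750: -3, 1000: 1, 1500: 1,
        2000: -1, 3000: -2, 4000: -2, 6000: -2}

def nal_r_insertion_gain(thresholds_db_hl):
    """Return prescribed insertion gain (dB) per frequency for a linear NAL-R fit."""
    x = 0.05 * (thresholds_db_hl[500] + thresholds_db_hl[1000] + thresholds_db_hl[2000])
    gains = {}
    for freq, k in K_DB.items():
        if freq in thresholds_db_hl:
            gains[freq] = max(0.0, x + 0.31 * thresholds_db_hl[freq] + k)
    return gains

# Example: mild-to-moderate sloping sensorineural loss
gains = nal_r_insertion_gain({250: 25, 500: 30, 1000: 40, 2000: 50, 4000: 60})
for freq, gain in sorted(gains.items()):
    print(f"{freq:>5} Hz: {gain:4.1f} dB insertion gain")
```

In the VS display, the green target area is, in effect, the standard speech spectrum raised by gains of this kind across frequency.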

Example of VS in use. Patient AQ is a 71-year-old female with a bilaterally symmetric mild-to-moderate sensorineural hearing loss (SNHL). AQ reported wearing digitally programmable hearing aids for 6 years. About 8 months ago, she acquired two new DSP behind-the-ear (BTE) hearing aids. Although AQ was initially pleased with her DSP hearing aids, she had recently become frustrated and sought an opinion from another audiologist.

The audiologist obtained a comprehensive audiometric evaluation via the Affinity clinical audiometry software (Figure 3, above right). The audiologist then placed the in-situ headset on AQ and acquired unaided, open-ear recordings, using a digital speech file available within the Affinity as the stimulus (Figure 4). The unaided recording serves as a baseline against which aided results can be compared. The green area in the middle of the graph in Figure 4 indicates the derived target speech spectrum, which is based on threshold and UCL measures and founded on Byrne and Dillon’s NAL model.12

AQ’s thresholds and UCL values were converted to dB SPL (via software) and were displayed on the graph. Thresholds are represented by the grey area on the lower portion of the graph, and UCL measures are shown near the top of the graph.
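
The HL-to-SPL conversion itself is an additive transform: each dB HL value is shifted by a frequency-specific correction that maps the audiometer dial level to SPL at the eardrum. The Python sketch below shows that structure only; the correction values are illustrative placeholders, not the transducer-specific or individually measured corrections the Affinity software applies.

```python
# Structure of a dB HL -> dB SPL (at eardrum) conversion: threshold plus a
# frequency-specific correction. The corrections below are illustrative
# placeholders; real software uses transducer-specific reference values
# (and, ideally, individually measured ear-canal corrections).

HL_TO_EARDRUM_SPL_DB = {250: 15.0, 500: 9.0, 1000: 9.0,
                        2000: 11.0, 4000: 12.0, 8000: 15.0}

def hl_to_eardrum_spl(values_db_hl):
    """Convert audiometric values (dB HL) to estimated eardrum SPL (dB)."""
    return {freq: level + HL_TO_EARDRUM_SPL_DB[freq]
            for freq, level in values_db_hl.items()
            if freq in HL_TO_EARDRUM_SPL_DB}

thresholds_spl = hl_to_eardrum_spl({250: 25, 500: 30, 1000: 40, 2000: 50, 4000: 60})
ucls_spl = hl_to_eardrum_spl({250: 95, 500: 100, 1000: 100, 2000: 100, 4000: 105})
print("Thresholds (dB SPL at eardrum):", thresholds_spl)
print("UCLs (dB SPL at eardrum):", ucls_spl)
```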

With her BTE hearing aids programmed as previously set, the audiologist acquired and demonstrated the original aided VS information (Figure 5). The aided response primarily consisted of low frequency amplification, representing a less-than-ideal hearing aid fitting. A rather poor aided SII score of 46% was recorded, consistent with AQ’s observation that she could “hear louder,” but could not hear clearly.

FIGURE 4. Unaided open-ear testing of Patient AQ. Grey areas are hearing thresholds (lower) and UCLs (upper) in dB SPL. The green area is the “target” speech spectrum, and the red area is AQ’s unaided speech spectrum. SII = 7%.

FIGURE 5. Green area is the “target” speech spectrum, and red area is AQ’s original aided speech spectrum. SII = 46%.

FIGURE 6. Green area is the “target” speech spectrum, which in this figure is completely covered by the red area. Red area is AQ’s revised aided speech spectrum. Revised SII = 75%.

The DSP instruments were reprogrammed and the result is shown in Figure 6. Note the red and green areas overlap significantly, indicating conversational speech is being amplified within AQ’s target speech range. An improved SII score of 75% was observed.

AQ was relieved and noted an immediate difference in sound quality and clarity for conversational speech at normal conversational levels. The audiologist verified appropriate gain and compression for soft, medium, and loud speech sounds.

Discussion

A successful hearing aid fitting is a dynamic, patient-centered process. To achieve optimum success, it is incumbent on the dispensing professional to tailor the process to meet the needs, abilities, and desires of the patient. Despite reasonable and well-intentioned hearing aid fitting rationales, and despite our ever-increasing focus on hitting (or nearly hitting) prescribed targets, real-ear measures are not used in the majority of hearing aid fittings.

It is not clear why more dispensing professionals do not use REMs consistently, particularly given the precise and objective information these measures provide. One might speculate that the lack of use is due to multiple factors. For example, evidence-based outcome measures specifically relating REM “target hitting” to successful hearing aid fittings are scant. Perhaps there is also a disconnect between traditional REM measures and the patient’s understanding of the process. In other words, is it possible that traditional REMs are a source of confusion rather than clarity from the patient’s viewpoint? Lastly, it can be argued that specific rationale-based hearing aid targets are designed for average patients rather than for specific patients. Therefore, patient-centered programming based on the patient’s auditory needs, perceptions, and preferences is likely to dominate the fitting process, regardless of the fitting rationale or target achievement.

Visible Speech allows the dispensing professional to record, demonstrate, and verify the appropriateness of the hearing aid fitting while reviewing, demonstrating, and explaining the process in terms the patient understands—based on human speech and the Speech Intelligibility Index.

Therefore, Visible Speech helps facilitate a patient-based understanding of hearing aid amplification for speech. Further, the multimedia presentation format helps engage the patient in the process, while facilitating greater retention of information presented.

Acknowledgement

The authors thank Claus Elberling, PhD, at Oticon A/S for his thoughtful review and comments in the preparation of this article.

This article was submitted to HR by Douglas L. Beck, AuD, director of professional relations for Oticon Inc, Somerset, NJ, and Jennifer Duffey, MS, an audiologist at Interacoustics, Eden Prairie, Minn. Correspondence can be addressed to Douglas Beck, AuD, at Oticon Inc, 29 Schoolhouse Road, Somerset, NJ 08875-6724; e-mail: .

References

  1. Mueller HG. Probe-mic measures: Hearing aid fitting’s most neglected element. Hear Jour. 2005;58(10):21-30.
  2. Strom KE. The HR 2006 dispenser survey. The Hearing Review. 2006;13(6):16-39. Available at: HR website.
  3. Hawkins DB, Cook JA. Hearing aid software predictive gain values: How accurate are they? Hear Jour. 2003;56(7):26-34.
  4. Aarts NL, Caffee CS. The accuracy and clinical usefulness of manufacturer-predicted REAR values in adult hearing aid fittings. The Hearing Review. 2005;12(12):16-22. Available at: HR website.
  5. Van Vliet D. When it comes to audibility, don’t assume. Measure! Hear Jour. 2006; 59(1):86.
  6. Margolis RH. In one ear and out the other: What patients remember. Audiology Online; 2004. Available at: www.audiologyonline.com/articles/arc_disp.asp?article_id=54.
  7. Beck DL, McGuire R. Multimedia: Better tools facilitate a better process. The Hearing Review. May 2006. Available at: HR website.
  8. Schum DJ, Beck DL. Modern applications of multi-channel non-linear amplification. News From Oticon. October 2005.
  9. Schum DJ, Beck DL. Meta-controls and advanced technology. News From Oticon. January 2006.
  10. Mueller HG, Killion MC. An easy method for calculating the articulation index. Hear Jour. 1990;43(9):1-4.
  11. American National Standards Institute. ANSI S3.5-1997. Methods for Calculation of the Speech Intelligibility Index. Melville, NY: ANSI.
  12. Byrne D, Dillon H. The National Acoustic Laboratories’ (NAL) new procedure for selecting the gain and frequency response of a hearing aid. Ear Hear. 1986;7(4):257-265.