Tech Topic | March 2015 Hearing Review

Changing the game by changing the focus.

Douglas L. Beck, AuD

As we move beyond our current approach, which centers on correcting for hearing loss, to an approach that emphasizes feeding the brain optimal information about incoming sound, we can do a better job of allowing hearing aid users to get the most out of their hearing. BrainHearing uses modern, highly sophisticated technologies and improved hearing aid fitting protocols (ie, Personalization) to help the brain orient, separate, focus, and recognize sounds in order to apply meaning to them, thereby maximizing hearing and listening.

Hearing and listening are often used as synonyms, yet they are distinct processes. Hearing can be defined as simply “perceiving sound,” whereas listening is a far more sophisticated task: listening is defined as “attributing meaning to sound.”1

Arguably, for the first 100 years, our profession focused on “hearing.” In 2015, we have the ability to go further than simply making sounds perceptible. That is, we can now deliver additional acoustic information that better preserves the natural acoustic environment: improved delivery of soft speech sounds, better preservation of speech details, improved delivery of interaural level and time differences, a more realistic extended bandwidth, better signal-to-noise ratios, and improved noise reduction, feedback management, and adaptive compression systems. Many of these are “smarter” than previous generations, and all are designed to maximize hearing and listening.

BrainHearing™—which is a proprietary and trademarked term by Oticon but will be used more generally as a broader industry concept in this article—emphasizes the importance and inclusion of the world’s most sophisticated processor (ie, the human brain) and the individual’s personal listening preferences to facilitate and maximize hearing and listening. Technologies and aural rehabilitation programs designed to support BrainHearing strive to maximize the ability of the ears and brain to work together through the preservation and delivery of natural acoustic information.

As all hearing care professionals (HCPs) know, hearing is a relatively basic, automatic function for those with normal hearing, as well as for people with lesser degrees of hearing loss. Listening is a far more sophisticated process: attributing meaning to sound.1 Listening involves multiple unique and sophisticated cognitive abilities, such as working memory, processing speed, and attention. Importantly, the ability to compare and contrast auditory information from the left and right ears is essential to making sense of sound (ie, to attributing meaning to sound) in difficult listening environments.2

That is, binaural summation and binaural squelch are of significant importance with regard to listening in challenging acoustic environments. The brain’s ability to use interaural level differences (ILDs) and interaural time differences (ITDs) to determine the origin of sound (ie, “knowing where to listen”) contributes significantly to the sophisticated acoustic and spatial analysis the human brain completes in milliseconds.

BrainHearing helps facilitate improved hearing and listening with less effort by supporting how the brain makes sense of sound, through the provision of key acoustic information as sound travels from the two hearing aids to the brain.

A Game Changer: An Emphasis on Better Pairing of Ears and Brain

Previously, the goal of hearing aid amplification was (more or less) to provide sounds that the patient was unable to hear, and hearing aids were only able to deliver a restricted bandwidth of amplified sound.

BrainHearing provides the brain with more of the acoustic information typically available via normal hearing, because perceiving sound and attributing meaning to sound are sophisticated cognitive processes. To listen maximally, the brain must orient, separate, focus, and recognize sounds to apply meaning to them. As hearing loss worsens, the quantity and quality of acoustic information delivered to the brain decreases, forcing the brain to work harder to make sense of sound. The result is increased listening effort and increased cognitive load,3 which often leaves people with hearing loss (aided and unaided) exhausted at the end of the day.

In order for the brain to make sense of sound, particularly in difficult listening environments, the brain endeavors to compare and contrast sounds from the left and right ears. The brain decodes (ie, untangles) not just the loudness information from each ear; it also attributes meaning to the acoustic differences across the two signals. The perception of these differences between the two inputs is extraordinarily important for the brain to decode and interpret (ie, apply meaning to) acoustic information in difficult acoustic environments.

Specifically, a more efficient and modern pairing of the impaired ear and the brain involves, but is certainly not confined to, the following factors.

Compression

Compression in modern hearing aids most often includes amplitude compression (eg, wide dynamic range compression, or WDRC). Amplitude compression systems in modern hearing aids have been designed to keep sounds from becoming too loud too quickly, to reduce the need for volume wheels, to prevent sounds from becoming uncomfortably loud, and to allow more audibility within the progressively narrower dynamic range of the average aging patient with hearing loss. Although BrainHearing uses compression to protect the ear from sudden loud sounds and further hearing damage, a “floating window of linearity” is applied to the primary speech signal to maximally preserve interaural level differences. This gives the brain acoustic information that helps us know “where to listen,” reducing listening effort in many noisy situations and facilitating better listening in difficult acoustic environments.

In an effort to make inaudible sounds audible, the natural tendency is to restore full access to all the sounds available to a person with normal hearing. The problem is that different individuals with sensorineural hearing loss (SNHL) have different perceptions of the loudness of soft sounds.4 The amount of gain provided to soft sounds should provide access to as much soft speech information as possible, but this has to be accomplished without creating an unnatural sense of the loudness of sounds.

The new Oticon VAC+ fitting rationale (made possible by the new Inium Sense digital platform) has been specifically designed to account for the need to improve access to soft, high frequency sounds. When combined with the Personalization approach in the Genie fitting software, the HCP can make adjustments in soft speech access based on the individual loudness perception of each patient. Improved access to softer sounds is not just about access to the details of the softer segments of speech, it’s about an improved, more natural perception of the full sound environment.

Compression kneepoint and ratios. Compression systems are described by multiple factors. Arguably, the two best-understood factors are the kneepoint and the compression ratio. The kneepoint (or compression threshold) is simply the input sound pressure level (SPL) at which the compression circuit becomes engaged/active.

Compression ratios indicate the amount of compression above the kneepoint. That is, a compression ratio of 3:1 indicates that above a given compression kneepoint, SPL is compressed by a 3:1 ratio. For example, if the compression kneepoint is 50 dB, as the input increases by 30 dB (eg, from 50 to 80 dB) at the input microphone, the SPL output of the hearing aid would only increase by 10 dB.
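To make the arithmetic concrete, here is a minimal sketch of the static input/output rule described above; the 20 dB of linear gain below the kneepoint is an arbitrary placeholder for illustration, not a fitting recommendation:

```python
def compressed_output_spl(input_spl, kneepoint=50.0, ratio=3.0, linear_gain=20.0):
    """Static WDRC input/output rule: below the kneepoint, gain is linear;
    above it, each dB of input yields only 1/ratio dB of output growth."""
    if input_spl <= kneepoint:
        return input_spl + linear_gain
    return kneepoint + linear_gain + (input_spl - kneepoint) / ratio

# The example above: kneepoint 50 dB SPL, ratio 3:1.
# A 30 dB rise at the input (50 -> 80 dB SPL) raises the output by only 10 dB.
print(compressed_output_spl(80.0) - compressed_output_spl(50.0))  # 10.0
```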

Fast, slow, and adaptive compression release times. Commercially available compression release times also vary within and among hearing aid systems.5 Pittman, Pederson, and Rash6 note that the terms “fast” and “slow” compression release times refer to the time it takes for the circuit to return to its nominal gain value.

“Fast-acting” compressors generally have release times of 100 milliseconds or less; “slow-acting” compressors may have release times from 0.5 seconds to 2 seconds. Unfortunately, neither fast nor slow release times offer a globally accepted universal solution. For example, slow release times may produce “drop outs” (insufficient amplification immediately after a loud input), and fast release times often distort the acoustic waveform and may inadvertently decrease the signal-to-noise ratio (SNR).
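For intuition about what a release time controls, here is a minimal one-pole level tracker of the kind found in textbook compressor designs; the sample rate and time constants below are illustrative assumptions, not values from any particular hearing aid:

```python
import math

FS = 16_000  # assumed sample rate in Hz, for illustration only

def smoothing_coeff(time_constant_s):
    """Standard one-pole smoothing coefficient: exp(-1 / (tau * fs))."""
    return math.exp(-1.0 / (time_constant_s * FS))

def track_level(levels_db, attack_s=0.005, release_s=0.100):
    """Follow level rises quickly (attack) and falls slowly (release).

    A short release_s models a "fast-acting" compressor (<= 100 ms);
    a long one (0.5-2 s) models a "slow-acting" compressor.
    """
    a_att = smoothing_coeff(attack_s)
    a_rel = smoothing_coeff(release_s)
    tracked, out = levels_db[0], []
    for x in levels_db:
        a = a_att if x > tracked else a_rel  # pick coefficient by direction
        tracked = a * tracked + (1.0 - a) * x
        out.append(tracked)
    return out
```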

Adaptive compression, as found in Oticon’s SpeechGuard, monitors significant changes in loudness at the input microphone and triggers the more appropriate (ie, adaptive) release time. Thus, in the absence of significant loudness changes, or given a decrease in loudness, longer release times are used; conversely, in the presence of a significant increase in loudness, short release times are used.
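As a rough sketch of that decision rule (the threshold and release times below are invented for illustration and do not describe Oticon’s proprietary SpeechGuard implementation):

```python
FAST_RELEASE_MS = 50      # assumed "fast" release time
SLOW_RELEASE_MS = 1500    # assumed "slow" release time
JUMP_THRESHOLD_DB = 6.0   # assumed level change that counts as "significant"

def select_release_ms(previous_level_db, current_level_db):
    """Pick a release time from the change in input level (hypothetical rule)."""
    if current_level_db - previous_level_db >= JUMP_THRESHOLD_DB:
        return FAST_RELEASE_MS  # significant loudness increase: recover quickly
    return SLOW_RELEASE_MS      # steady or falling loudness: preserve the waveform

print(select_release_ms(60.0, 72.0))  # 50   (sudden jump -> fast release)
print(select_release_ms(60.0, 61.0))  # 1500 (steady level -> slow release)
```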

Pittman et al6 compared 3 amplitude compression release times (slow, fast, and adaptive) with regard to the ability of children and adults with and without hearing loss to categorize words and environmental sounds in challenging listening environments. As expected, people with normal hearing performed better than people with hearing loss, and of note “listeners with normal hearing achieved optimal performance with slow acting compression.” However, Pittman and colleagues reported “listeners with hearing loss achieved optimal performance with adaptive compression.” They also reported “amplitude compression significantly affects perception of speech and environmental sounds,” and they concluded “listeners with hearing loss may derive significant benefit from hearing instruments that use adaptive amplitude compression, especially in complex listening environments.”6

Distortion

The vast majority of people with SNHL maintain some hearing ability. That is, the vast majority of people with hearing loss can perceive sound. However, as SNHL increases, multiple ear-based distortions occur, all of which decrease the quantity and quality of natural sound information reaching the brain. As SNHL increases, the ears experience multiple and significant distortions: threshold distortion, dynamic range distortion, temporal distortion, spectral distortion, and chemical and neurologic distortions.

Distortions associated with SNHL can degrade, to varying degrees, the information transmitted along the auditory pathway before it reaches the brain. Therefore, hearing loss does not merely attenuate loudness; arguably more importantly, SNHL distorts the acoustic information.

Given SNHL, the information sent from the ear to the brain is of lesser quality in multiple ways—all of which are unique to the individual, and all of which are impossible to accurately and completely convey on an audiogram. Further, given SNHL, it is the “distorted” auditory signal that ascends the central nervous system and is delivered to the brain for analysis. As one might expect, disentangling the distorted signal requires biologic energy and cognitive ability.

Unfortunately, as we age, our processing ability (in both quantity and quality) decreases, and the stress and strain of listening (ie, making sense of sound) increases. That is, as multiple distortions impact the original acoustic signal, the brain’s task of assigning meaning to the signal becomes increasingly difficult, even though one may “hear” the sounds. Intuitively, then, to hear and listen maximally, it makes good sense to maintain natural sounds and to deliver non-distorted, naturally occurring acoustic information.

Spatial Hearing

Spatial hearing allows us to identify the origin/location of sound in space. Arguably of greater importance, spatial hearing allows us to attend to a primary sound source in difficult listening situations (due to binaural summation) while ignoring/de-valuing sounds of lesser interest (due to binaural squelch).7-9 Specifically, ILDs and ITDs must be maintained as much as possible throughout the auditory system to preserve the integrity of the original acoustic information and to let the brain know “where to listen.” Binaural summation and binaural fusion (secondary to the perception of ILDs and ITDs) are important spatial processes that allow the human brain to facilitate maximal listening (attributing meaning to sound) by focusing on the primary sound source within the acoustic environment.10
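For a sense of the scale of the timing cues involved, here is a back-of-envelope sketch using Woodworth’s classic spherical-head approximation, a textbook model rather than anything specific to BrainHearing; the head radius and speed of sound are typical assumed values:

```python
import math

HEAD_RADIUS_M = 0.0875   # assumed average adult head radius (~8.75 cm)
SPEED_OF_SOUND = 343.0   # speed of sound in air, m/s (at ~20 degrees C)

def itd_seconds(azimuth_deg):
    """Woodworth's approximation: ITD = (a / c) * (theta + sin(theta)),
    with theta the source azimuth (0 = straight ahead, 90 = to one side)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

for az in (0, 30, 60, 90):
    print(f"{az:2d} deg -> {itd_seconds(az) * 1e6:4.0f} microseconds")
# A source at 90 degrees yields roughly 650 microseconds, the classic
# maximum ITD; at 0 degrees (straight ahead) the ITD is zero.
```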

Maximal hearing and listening in difficult acoustic situations requires the brain to compare and contrast sounds from the left and right ears—in real time. Indeed, it is the difference between the left and right input signals that reveals to the brain key information for acoustic, speech, spatial, loudness, and other sound processing.

Therefore, to maximally understand speech in noise, amplification systems must maximally preserve spatial cues. That is, for spatial hearing to benefit the listener, the sounds received at both ears (and/or both hearing aids) must be transmitted to the brain in “real” time. Thus, the complete human hearing and listening system requires two ears and one brain in constant communication, so the brain can better know where to focus attention.

Personalization

Unfortunately, hearing thresholds as measured on an audiogram, fitting algorithms, real-ear targets, word recognition scores, and otoacoustic emissions cannot tell us what a particular patient would prefer to listen to. All HCPs have witnessed the situation in which 2 patients with the same (or highly similar) audiograms end up with entirely different hearing aid fittings.

Indeed, each person’s perceptual system and sound preferences are unique. Even people with normal hearing do not necessarily listen in a homogeneous way; their demographic variables and brain training influence their listening ability.11

Allowing patients to select a preferred sound from multiple reasonable alternatives empowers them to actively participate in their fitting solution and helps ensure satisfaction with the recommended hearing solution. Real-ear measures, as well as other validation and verification measures, remain very important in the hearing aid fitting process. In fact, real-ear measurement remains the only tool that documents the acoustic/physical characteristics of the ear canal and the sound delivered to the tympanic membrane.

However, combining “fitting to target” with “satisfying the patient” via an appropriate and pleasant sound may offer synergies previously not available in hearing aid fittings. Johnson12 reported that when patients are allowed to compare multiple validated hearing aid fittings at the initial fitting, the HCP enables the patient to direct his/her auditory solution within a range of reasonable options. Personalization helps achieve a higher level of overall satisfaction, as it acknowledges that auditory processing capabilities and sound preferences differ across individuals.

Conclusion

BrainHearing is more than a buzzword. BrainHearing, in some respects, represents a philosophical change from hearing to maximal hearing and listening. In this way, our field moves from simply making sounds louder (amplification) to recognizing that the brain processes many psychoacoustic cues, and the more reliably these cues can be delivered to the brain, the better! BrainHearing uses modern, highly sophisticated technologies and improved hearing aid fitting protocols (ie, Personalization) to help the brain orient, separate, focus, and recognize sounds in order to apply meaning to sound, thereby maximizing hearing and listening.

The cognitive system integrates information from all of the senses to create a total impression of the world around the listener. Sounds must make sense when combined with all of the other information the system has garnered about the ongoing world around the listener. As we move beyond a focus solely on correcting for hearing loss to an approach that emphasizes feeding the brain the fullest amount of information about incoming sound, we can do a better job of allowing the hearing aid user to get the most out of their hearing.

References

  1. Beck DL, Flexer C. Listening is where hearing meets brain in children and adults. Hearing Review. 2011;18(2):30-35. Available at: https://hearingreview.com/2011/02/listening-is-where-hearing-meets-brain-in-children-and-adults

  2. Beck DL, Edwards B, Humes LE, Lemke U, Lunner T, Lin FR, Pichora-Fuller K. Expert Roundtable: Issues in audition, cognition, and amplification. Hearing Review. 2012;19(10):16-26. Available at: https://hearingreview.com/2012/09/expert-roundtable-issues-in-audition-cognition-and-amplification

  3. Desjardins JL, Doherty KA. The effect of hearing aid noise reduction on listening effort in hearing impaired adults. Ear Hear. 2014;35(5):600-610.

  4. Buus S, Florentine M. Growth of loudness in listeners with cochlear hearing losses: recruitment reconsidered. J Assoc Res Otolaryngol. 2002;3:120-139.

  5. Cox RM, Xu J. Short and long compression release times: Speech understanding, real-world preferences, and association with cognitive ability. J Am Acad Audiol. 2010;21(2):121-138.

  6. Pittman AL, Pederson AJ, Rash MA. Effects of fast, slow, and adaptive amplitude compression on children’s and adults’ perception of meaningful acoustic information. J Am Acad Audiol. 2014;25:834-847.

  7. Beck DL, Sockalingham R. Facilitating spatial hearing through advanced hearing aid technology. Hearing Review. 2010;17(4):44-47. Available at: https://hearingreview.com/2010/04/facilitating-spatial-hearing-through-advanced-hearing-aid-technology

  8. Kidd G Jr, Arbogast TL, Mason CR, Gallun FJ. The advantage of knowing where to listen. J Acoust Soc Am. 2005;118:3804-3815.

  9. Schneider BA, Li L, Daneman M. How competing speech interferes with speech comprehension in everyday listening situations. J Am Acad Audiol. 2007;18:559-572.

  10. Beck DL. Can advanced signal processing facilitate spatial hearing? Brit Soc Audiol. 2010;61[Dec].

  11. Füllgrabe C, Moore BCJ, Stone MA. Age-group differences in speech identification despite matched audiometrically normal hearing: Contributions from auditory temporal processing and cognition. Front Aging Neurosci. 2015;6:347. doi: 10.3389/fnagi.2014.00347

  12. Johnson EE. An initial fit comparison of two generic hearing aid prescriptive methods (NAL-NL2 and CAM2) to individuals having mild to moderately severe high frequency hearing loss. J Am Acad Audiol. 2013;24:138-150.

Douglas L. Beck, AuD, is director of professional relations at Oticon Inc, Somerset, NJ.

Correspondence can be addressed to: [email protected]

Citation for this article: Beck DL. BrainHearing: Maximizing hearing and listening. Hearing Review. 2015;22(3):20.