Inside the Research | January 2020 Hearing Review
By DOUGLAS L. BECK, AuD
As hearing aid processing becomes more complex, the area of psychoacoustics becomes increasingly important for understanding exactly what these devices are doing (or trying to do) for your patients’ compromised auditory systems. Jennifer J. Lentz, PhD, recently published a fascinating book, Psychoacoustics: Perception of Normal and Impaired Hearing with Audiology Applications, available from Plural Publishing (2020).1 We thought this was a good opportunity to catch up with her about some of the featured topics.
Beck: Hi Jennifer! Thanks for speaking with me today.
Lentz: Hi Doug. My pleasure. Thanks for the interest in my book!
Beck: For those not familiar with you, I’d like to share a little bit of your academic training. I know you earned your BS in Biomedical Engineering in 1993 from the University of Iowa, and your MS and PhD in Bioengineering from the University of Pennsylvania. What was your dissertation about?
Lentz: I was examining the application of psychoacoustic modeling techniques to normal auditory perception, which was a bit unusual at that time. After that, I completed my post-doc training at Walter Reed Army Medical Center, studying auditory perception in people with sensorineural hearing loss. I’ve been here at Indiana University in Bloomington since 2002, where I currently serve as Chair and Professor of the Department of Speech and Hearing Sciences.
Beck: To start with, it seems to me audiology training in psychoacoustics is generally quite limited, and may indeed even be absent for hearing aid dispensers—unless they pursue it on their own, or unless they were fortunate enough to have an extraordinary mentor. Is that your impression?
Lentz: Yes. That’s a fair observation and an important one, because hearing aid fittings are fundamentally grounded in psychoacoustics. To be clear, books which focus on hearing sciences are important, but that’s a slightly different topic. Hearing science books tend to primarily address healthy auditory systems, without doing a deep dive into the psychoacoustics associated with abnormal auditory systems. Of course, studying the normal and healthy auditory system is important. But, ideally, that work should set the stage for a deeper dive into the auditory perception of people with hearing and listening disorders, so it can be applied to the patients seen in the clinic.
Beck: Can I get your thoughts on compression and how it impacts hearing aid fittings?
Lentz: Compression is a vast topic, and several books have been written about it. Of course, the compression circuits we have in 2020 are vastly different from the compression circuits we had 10 years ago. Any discussion about compression circuits would include attack and release times, as well as adaptive release times, input and output compression, and much more. And compression parameters are typically set the same in both ears, even though one ear may have a slightly different ability to hear tones, or speech, or to understand speech in quiet or speech in noise, than the other—and so testing the patient’s outcomes objectively and subjectively is very important.
That is, simply applying a first-fit protocol based solely on pure-tone thresholds becomes problematic quickly. Additionally, there are people (like me!) with normal pure-tone thresholds who absolutely hear better in one ear than the other, which may indicate the compression settings for the better ear should be programmed less aggressively. I believe you published an article2 on that topic with some 20+ co-authors indicating that, despite normal thresholds, many people have hearing difficulty and/or speech-in-noise problems. The point is there are people with normal pure-tone thresholds, or with symmetric thresholds, who have an asymmetric ability to listen. I know this gets deep into the weeds quickly, but when a person reports a preference for one ear over the other while using the phone, it might be wise to test deeper than thresholds.
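To make the attack and release times Dr Lentz mentions more concrete, here is a minimal, purely illustrative Python sketch of a dynamic range compressor’s gain computation. It is not any manufacturer’s algorithm; the threshold, ratio, and time constants are invented example values:

```python
import math

def compressor_gain_db(input_db, threshold_db=50.0, ratio=3.0):
    """Static input/output curve: above the threshold, each 1 dB of input
    yields only 1/ratio dB of output, so gain reduction grows with level."""
    if input_db <= threshold_db:
        return 0.0  # no gain reduction below the compression knee
    return -(input_db - threshold_db) * (1.0 - 1.0 / ratio)

def smooth_gains(levels_db, fs_hz=1000.0, attack_ms=5.0, release_ms=50.0):
    """One-pole smoothing of the gain over time: a short attack time lets the
    aid clamp down quickly on sudden loud sounds, while a longer release time
    lets gain recover gradually afterward."""
    a_attack = math.exp(-1.0 / (fs_hz * attack_ms / 1000.0))
    a_release = math.exp(-1.0 / (fs_hz * release_ms / 1000.0))
    g, gains = 0.0, []
    for level in levels_db:
        target = compressor_gain_db(level)
        # More gain reduction needed -> attack; less needed -> release
        a = a_attack if target < g else a_release
        g = a * g + (1.0 - a) * target
        gains.append(g)
    return gains
```

With these example values, `compressor_gain_db(80.0)` returns -20.0: a 30 dB overshoot above the 50 dB threshold at a 3:1 ratio is reduced by 20 dB, and the attack/release smoothing determines how quickly that reduction is actually applied and withdrawn.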
Beck: Exactly, yes. That article explored more than a dozen reasons why some 26 million people in the United States are likely to have hearing difficulty and/or speech-in-noise problems despite having normal thresholds. But back to your point, I think what you’re saying is one should not simply program a circuit; one must also verify and validate (V&V) the fitting. The case you mentioned about asymmetric listening is interesting, as it may also be indicative of binaural interference, the incidence of which can run as high as 17%.3 Of course, we all learn these things in our doctoral programs, but in the real world, I think the rate of V&V assessment is extremely low.
Lentz: Yes, it seems to be disappointingly low. One shouldn’t just set it and send the patient out the door. These are very intricate circuits generally placed on the ears of people with abnormal auditory systems. Although one certainly benefits from well-known fitting protocols (such as NAL-NL2, DSL v5, etc), it takes just a few minutes to test the ability to understand speech in noise unaided (as a baseline) and then again, aided. The aided SIN score should, of course, be better than the unaided score, reflecting not just how much louder the sounds are, but also how much real-world benefit the noise reduction circuit provides for understanding speech in noise.
Beck: I agree. If I only had one outcome measure to evaluate the effectiveness of a hearing aid fitting, it would be a SIN test.4 In the May/June 2019 edition of Audiology Today, Lauren Benitez and I published a new SIN test,5 which takes 2 minutes to administer, is essentially free, and can be administered in any language.
Let’s move on to noise reduction circuits. I think modern digital noise reduction (DNR) circuits are outstanding, yet many hearing care professionals are reluctant to use them, despite a vast supply of scientific and peer-reviewed articles from the last 10-15 years showing tremendous benefit. How do you explain noise reduction circuits and their value?
Lentz: I think it’s reasonable to start with amplitude modulation (AM), as that is a very important concept, and modern hearing aids generally use some sort of AM detection algorithm to detect and attenuate noise. Specifically, there are AM patterns which indicate speech signals are present, and there are other patterns which clearly indicate steady-state noise, such as one might expect from fluorescent lights, air conditioners, etc. These patterns are easily recognized by DNR systems, allowing steady-state noise to be attenuated without removing speech sounds.
I suspect many hearing care professionals simply turn DNR on or off based on their own philosophical approach. However, DNR circuits are generally built on AM, and these circuits are important to the successful use of hearing aids. Contrary to common belief, DNR circuits do not attenuate speech sounds. Further, typical AM noise reduction can provide 2 or 3 dB or more of steady-state noise reduction, and many recent studies have shown that using effective noise reduction allows the brain to better concentrate on speech sounds with less effort, as measured objectively by pupillometry.6
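As a rough illustration of the AM idea described above, one can classify a frequency band as “steady” or “fluctuating” from the modulation depth of its envelope: a fluctuating envelope suggests speech, while a flat envelope suggests fan- or hum-like noise. This is a hypothetical Python sketch, not any real hearing aid algorithm; the depth threshold and attenuation values are invented for illustration:

```python
def modulation_depth(envelope):
    """Crude AM metric: the band envelope's standard deviation divided by its
    mean (~0 for steady noise; larger for speech-like fluctuation)."""
    n = len(envelope)
    mean = sum(envelope) / n
    variance = sum((e - mean) ** 2 for e in envelope) / n
    return (variance ** 0.5) / mean if mean > 0 else 0.0

def band_attenuation_db(envelope, depth_threshold=0.2, max_atten_db=3.0):
    """Attenuate a band only when its envelope looks steady (low modulation
    depth), leaving speech-dominated bands untouched."""
    if modulation_depth(envelope) < depth_threshold:
        return -max_atten_db  # steady-state noise: turn this band down
    return 0.0  # fluctuating, speech-like: leave the band alone
```

Run per frequency channel, logic like this attenuates the air-conditioner band while passing the band carrying a talker’s fluctuating speech envelope, which is why DNR can reduce noise without removing speech sounds.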
As a result, patient satisfaction increases, as does the ability to understand speech in noise (in some cases), and returns for credit decrease. I think many hearing care professionals believe noise reduction circuits are still the old low-frequency cuts (ie, high-pass filtering) used decades ago, but those circuits were discontinued long ago.
Beck: OK, and can you explain a little bit about interaural loudness differences (ILDs) and interaural timing differences (ITDs), and how important these are in hearing aid fittings?
Lentz: Sure. Interaural loudness differences are also called interaural level differences. ILDs are the differences in loudness due to the head shadow effect. ILDs are greatest when a sound comes from the right or left side (ie, ±90°); if the sound comes from the front (0°) or the rear (180°) or along the midline, it would be equally loud in both ears.
ILDs are almost a non-issue with regard to low frequencies, as the wavelengths are so large that they simply wrap around the head and there isn’t much difference between the ears. However, for a signal coming from the left or right side, if we measure ILDs between 2000 and 8000 Hz, the difference can be vast: 20 dB or more.
Further, if the sound comes from the left or right, the brain gets another useful cue: interaural timing differences. ITDs are the tiny delays between a sound’s arrival at the near ear and at the far ear, which tell the brain which ear received the sound first. Interaural differences are used by the human brain to localize sound, to identify primacy (what happened first) as well as recency (what happened last, or most recently), and to estimate distance. All of these cues tell the brain a lot about where to focus and attend.
Regarding hearing aid fittings, there are some hearing aids which do capture and/or maintain ILDs and ITDs, and as you would expect, these sophisticated circuits do make it easier to listen because the brain is getting more of the psychoacoustic information it needs to untangle sound in the real world. Other circuits which don’t capture these differences may misrepresent the soundscape within the brain, and make it more difficult to understand SIN.
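The physics behind these binaural cues can be made concrete with two textbook formulas: acoustic wavelength (which explains why ILDs are small at low frequencies) and Woodworth’s classic spherical-head approximation of ITD. This is a sketch assuming a 343 m/s speed of sound in air and an 8.75 cm average head radius; real heads, of course, vary:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly room temperature

def wavelength_m(freq_hz):
    """Acoustic wavelength; waves much longer than the head diffract
    around it, producing little head shadow and small ILDs."""
    return SPEED_OF_SOUND / freq_hz

def woodworth_itd_s(azimuth_deg, head_radius_m=0.0875):
    """Woodworth's spherical-head approximation of the interaural time
    difference: ITD = r * (theta + sin theta) / c, for azimuth theta
    measured from straight ahead (0 deg = midline, 90 deg = full side)."""
    theta = math.radians(azimuth_deg)
    return head_radius_m * (theta + math.sin(theta)) / SPEED_OF_SOUND
```

With these assumed values, a 250-Hz tone has a wavelength of about 1.4 m, far larger than the head, so it wraps around it, while at 4000 Hz the wavelength is under 9 cm and the head casts a substantial acoustic shadow. Likewise, the ITD is zero at the midline and grows to roughly 0.66 ms at ±90°, which is the scale of timing cue a hearing aid must preserve (or at least not distort) if the brain is to localize sound.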
Beck: And what about localization and lateralization?
Lentz: Localization is the ability to identify the point in space from which a sound originates, and again, that depends heavily on ILDs. In contrast, lateralization depends more or less on ITDs. As such, lateralization occurs within the head, as may occur with music heard through an excellent set of headphones: when the sound is not externalized, it seems to originate within your head.
Beck: And these effects are also important for people with bi-modal fittings, such as a hearing aid on one ear and a cochlear implant on the other.
Lentz: Yes, it’s generally better to have acoustic information available on both sides, even if the sound isn’t as clear as one might like. In fact, even if one ear delivers no word recognition, binaural input to the brain may allow the brain to localize sound, so the individual knows where to focus and attend. But we have to be very careful because every patient is unique, and we really need to test and measure; we need to use V&V; we need to counsel appropriately, so the patient has reasonable expectations for whatever their auditory technology is.
Beck: With regard to digital remote microphones (DRMs) and FM systems, in general, they don’t preserve binaural cues (ILDs and ITDs), yet they offer a relatively amazing improvement in signal-to-noise ratio (SNR)—perhaps 12-15 dB or more. So then, given a child or an adult with an auditory processing disorder, hidden hearing loss, cochlear synaptopathy, or auditory neuropathy spectrum disorder, in general, is it more important to give them binaural cues or an excellent SNR?
Lentz: This is a difficult question, because we just spoke about the value of binaural input, and that is very well-known science. However, given people with listening difficulties, I think if you had to pick one or the other (binaural information or 12-15 dB improvement in SNR), the SNR is more important for most people most of the time. But again, each person is unique, and you want to make sure that whatever you do, V&V gives you assurance regarding the best answer for that individual.
Beck: Thanks for the fascinating discussion. It’s been a pleasure!
Lentz: Thanks Doug. I enjoyed it too. Thanks for your interest in the book.
1. Lentz JJ. Psychoacoustics: Perception of Normal and Impaired Hearing with Audiology Applications. San Diego, CA: Plural Publishing; 2020.
2. Beck DL, Danhauer JL, Abrams HB, et al. Audiologic considerations for people with normal hearing sensitivity yet hearing difficulty and/or speech-in-noise problems. Hearing Review. 2018;25(10):28-38.
3. Mussoi BSS, Bentler RA. Binaural interference and the effects of age and hearing loss. J Am Acad Audiol. 2017;28(1):5-13.
4. Beck DL, Nilsson M. Speech-in-noise testing: A pragmatic addendum to hearing aid fittings. Hearing Review. 2013;20(5):17-19.
5. Beck DL, Benitez L. A two-minute speech-in-noise test: Protocol and pilot data. Audiology Today. 2019;31(3):28-34.
6. Beck DL, Ng E, Jensen JJ. A scoping review 2019: OpenSound Navigator. Hearing Review. 2019;26(2):28-31.
About the author: Douglas L. Beck, AuD, is Executive Director of Academic Sciences at Oticon Inc, Somerset, NJ. He has served as Editor In Chief at AudiologyOnline and Web Content Editor for the American Academy of Audiology (AAA). Dr Beck is an Adjunct Clinical Professor of Communication Disorders and Sciences at the State University of New York, Buffalo, and also serves as Senior Editor of Clinical Research for The Hearing Review’s Inside the Research column.
CORRESPONDENCE can be addressed to Dr Beck at: firstname.lastname@example.org
Citation for this article: Beck DL. Psychoacoustics: Auditory perception in normal and impaired hearing: Interview with Jennifer Lentz, PhD. Hearing Review. 2020;27(1):26-27.