Tech Topic | November 2019 Hearing Review

Identification of acoustic scenes using an enhanced signal classification system and motion sensors has recently been implemented in the Signia Xperience hearing aids. This study evaluates the effectiveness of these systems in both laboratory and real-world environments.

Modern hearing aids are very effective at restoring audibility. Signal processing also has progressed to the point that, for some listening-in-noise conditions, speech understanding for individuals with hearing loss is equal to or better than that of their peers with normal hearing.1

It is no secret, however, that an important component of the overall hearing experience is the listener’s intent—or one’s desired acoustic focus. At a noisy party, for example, we can focus our attention on a person in a different conversation group to “listen-in” on what he or she is saying. While driving a car, we can divert our attention from the music on the radio to focus on a talker from the back seat. Our listening intentions often are different in quiet versus noise, when we are outside versus in our homes, or when we are moving versus when we are still. As hearing technology improves, efforts continue to be made to automatically achieve the best possible match between the brain’s intentions and the hearing aid’s processing.

As recently as the 1960s, it was common for individuals to be fitted with hearing aids that had a single processing scheme designed for all occasions. There were no user controls other than volume adjustment. This changed in the early 1970s, with the introduction of directional microphone technology. One of the first directional hearing aids had a slider on top of the BTE case, which allowed the patient to change the polar pattern in small increments going from 100% omnidirectional to 100% directional—one of the first attempts to link listening intention to the processing of the hearing aid, albeit not automatically.

In the years that followed, it became common for hearing aids to have a toggle switch or a button which allowed for switching between omnidirectional and directional. Unfortunately, for a variety of reasons, many patients did not utilize this feature and simply used only the omnidirectional program.2

With the introduction of digital hearing aids, instruments that automatically switched between omnidirectional and directional processing became common in the early 2000s.3 In the years that followed, we saw the development of automatic adaptive polar patterns, allowing the null to track a moving noise source,4 directional focus to the back and to the sides,5,6 and more recently, narrow directionality using bilateral beamforming.7 Again, all of these features were developed to match the hearing aid user’s probable intent for a given listening situation. So what is left to do?

New Signal Processing

One area of interest centers on improving the prominence given to speech and other environmental sounds when they originate from azimuths other than the front of the user, particularly when background noise is present; in other words, identification and interpretation of the acoustic scene. To address this issue, an enhanced signal classification system was recently developed for the new Signia Xperience hearing aids. This approach considers such factors as overall noise floor; distance estimates for speech, noise, and environmental sounds; signal-to-noise ratios; azimuth of speech; and ambient modulations in the acoustic soundscape.

A second addition to the processing of the Xperience product—again to hopefully mimic the intent of the hearing aid user—was to include motion sensors to assist in the signal classification process, leading to a combined classification system named “Acoustic-Motion Sensors.” The acceleration sensors conduct three-dimensional (3D) measurements every 0.5 milliseconds. The post-processing of the raw sensor data occurs every 50 milliseconds and is in turn used to control the hearing aid processing.

In nearly all cases, when we are moving, our listening intentions are different than when we are still; we have an increased interest in what is all around rather than a specific focus on a sound source. Using these motion sensors, the processing of Xperience is effectively adapted when movement is detected.
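To make the described sensor cadence concrete, here is a minimal Python sketch: raw 3D acceleration samples arrive every 0.5 ms (2 kHz), and a post-processed moving/still decision is produced once per 50-ms frame. The variance-of-magnitude detector and its threshold are hypothetical stand-ins for illustration, not the actual Signia classification algorithm.

```python
import math
import random

# Sensor cadence as described in the text: one 3D measurement every
# 0.5 ms (2 kHz), post-processing every 50 ms (100 samples per frame).
SAMPLE_RATE_HZ = 2000
FRAME_MS = 50
FRAME_LEN = SAMPLE_RATE_HZ * FRAME_MS // 1000  # 100 samples per frame

def is_moving(frame, threshold=0.02):
    """Classify one 50-ms frame of (x, y, z) samples as moving/still.

    Assumed detector: large fluctuation in acceleration magnitude
    implies the wearer is in motion (a simplification, not the
    manufacturer's method).
    """
    mags = [math.sqrt(x * x + y * y + z * z) for (x, y, z) in frame]
    mean = sum(mags) / len(mags)
    variance = sum((m - mean) ** 2 for m in mags) / len(mags)
    return variance > threshold

# Simulate one still frame (gravity only, in g) and one walking frame
# with synthetic vertical bounce.
still = [(0.0, 0.0, 1.0) for _ in range(FRAME_LEN)]
rng = random.Random(0)
walking = [(0.0, 0.0, 1.0 + 0.5 * rng.uniform(-1, 1)) for _ in range(FRAME_LEN)]

print(is_moving(still))    # False
print(is_moving(walking))  # True
```

The 50-ms decision could then gate the hearing aid processing, eg, widening the directional focus whenever movement is detected.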

To evaluate the patient benefit of these new processing features, two research studies were conducted to:

1) Evaluate the efficacy of the algorithms in laboratory testing, and

2) Determine the real-world effectiveness using ecological momentary assessment (EMA).

Laboratory Assessment of Acoustic-Motion Sensors

The participants were 13 individuals with bilateral, symmetrical downward-sloping mild-to-moderate hearing loss (6 males, 7 females), ranging in age from 26 to 82 (mean age 60). All were experienced users of bilateral amplification and their mean hearing loss was 30 dB at 250 Hz, sloping to 64 dB at 6000 Hz.

The participants were fitted bilaterally with two different sets of Signia Pure RIC hearing aids, which were identical except that one set had the new acoustic scene classification algorithm as well as motion sensors. The hearing aids were programmed to the Signia fitting algorithm using Connexx 9.1 software, and fitted with double domes.

The participants were tested in two different listening situations. For both situations, ratings were conducted on 13-point scales ranging from 1 (Strongly Disagree) to 7 (Strongly Agree), with mid-point (half-step) ratings allowed. The ratings were based on two statements related to different dimensions of listening:

1) Speech understanding: “I understood the speaker(s) from the side well,” and

2) Listening effort: “It was easy to understand the speaker(s) from the side.”
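A 1-to-7 scale that allows mid-point (half-step) ratings yields exactly 13 response options, which a quick sketch confirms:

```python
# Enumerate the 13-point scale: endpoints 1 (Strongly Disagree) and
# 7 (Strongly Agree), with half-step ratings in between.
scale = [1 + 0.5 * i for i in range(13)]
print(scale)       # [1.0, 1.5, 2.0, ..., 6.5, 7.0]
print(len(scale))  # 13
```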

Scenario #1 (Restaurant). This scenario was designed to simulate the situation when a hearing aid user is engaged in a conversation with a person directly in front and, unexpectedly, a second conversation partner, who is outside the field of vision, enters the conversation. This is something that might be experienced at a restaurant when a server approaches. The target conversational speech was presented from 0° azimuth (female talker; 68 dBA), and the background cafeteria noise (64 dBA) was presented from four speakers surrounding the listener (45°, 135°, 225°, and 315°). The unexpected male talker (68 dBA) was presented randomly, originating from a speaker at 110°. The participants were tested with the two sets of instruments (ie, new processing On vs Off). After each series of speech signals from the talker from the side, the participants rated their agreement using the scale described earlier.

Scenario #2 (Busy street with traffic). This scenario was designed to simulate the situation when a person is walking on a sidewalk on a busy street with traffic noise (65 dBA) and a conversation partner on each side. The azimuths of the traffic noise speakers were the same as for Scenario #1, and for this testing, the motion sensor was either On or Off (although the participant was seated, the motion sensor was activated to respond as if the participant was moving for the test condition). The participant faced the 0° speaker, with the speech from the conversational partners coming from 110° (male talker) and 250° (female talker) at 68 dBA. The rating statements and response scales were the same as used for Scenario #1.

Results

In the restaurant scenario, participants had little trouble understanding the conversation from the front, with median ratings of 6.5 (maximum=7.0) for both instruments. There was no significant difference between the two types of processing (p>.05) for this talker from the front. For the talker from the side, however, there was a significant advantage (p<.05) for the new processing, for both speech understanding and ease of listening (Figure 1).

Figure 1. Restaurant Condition: Mean ratings and 95% confidence intervals for both speech understanding and listening effort for the speaker from the side. The 13-point scale was from 1=Strongly Disagree to 7=Strongly Agree. The participant (surrounded by cafeteria noise; 64 dBA), while listening to a conversation originating from 0°, rated a talker that randomly spoke from 110° (SNR=+4 dB). The asterisk indicates significance at p<.05.

Figure 2. Traffic Condition: Mean ratings and 95% confidence intervals for both speech understanding and listening effort. The 13-point scale was from 1=Strongly Disagree to 7=Strongly Agree. The participant, surrounded by background traffic noise (65 dB SPL), provided ratings for talkers randomly originating from either side (110° and 250°; SNR=+3 dB). The asterisk indicates significance at p<.05.

The mean results for the traffic scenario are shown in Figure 2. Recall that in this case, the participant was surrounded by traffic noise (SNR=+3 dB) and had conversation partners on either side (110° and 250°). This listening situation was somewhat more difficult, and therefore, overall mean ratings were slightly below those of the Restaurant scenario, but the same general pattern emerged. That is, when the new signal classification strategies were implemented, performance was significantly better (p<.05) for both speech understanding and listening effort.
For each condition, the participants also were asked if they would recommend the product just tested to a friend, using a rating scale from No to Definitely Yes. A significant advantage was observed for the conditions with the new processing and motion sensors activated.

Real-World Effectiveness

While the positive findings from the laboratory data for the new types of processing were encouraging, it was important to determine if these patient benefits extend to real-world hearing aid use. A second study, therefore, was conducted with the Xperience product involving a home trial.

The 35 participants (19 males, 16 females) in the study all had bilateral symmetrical downward-sloping hearing losses and were experienced users of hearing aids (average experience was 6 years). Their mean audiogram ranged from 29 dB at 500 Hz sloping to 62 dB at 4000 Hz. The participants, recruited from four different hearing aid dispensing offices, ranged in age from 37 to 86 years, with a mean age of 68.5 years.

The participants were fitted bilaterally with Signia Xperience Pure 312 7X RIC instruments, with vented click-sleeve ear coupling. The hearing aids were programmed to the patient’s hearing loss using the Signia Xperience fitting rationale.

The participants rated their hearing aid experience during the one-week field trial using ecological momentary assessment (EMA). That is, during or immediately after a real-world listening experience, ratings for that experience were conducted. The EMA app linked the participants’ smart phone to the Signia hearing aids and logged responses during the time that participants were answering a questionnaire. The primary EMA questions covered seven different listening environments, the actions of the user (still or moving), and the users’ perceptions for the situation. The participants were trained on using the app prior to the home trial.

Results

For the analyses, questionnaires that were only started or not fully completed were eliminated, resulting in 1,938 EMAs used for the findings reported here (an average of 55 per participant for the week-long trial). As discussed earlier, one of the primary new features of Xperience is the motion sensors that are integrated into the hearing aids. To evaluate the effectiveness of this feature, EMAs were examined for three different conditions of speech understanding in background noise when the participants reported that they were moving: 1) noise in the home (136 EMAs); 2) noise inside a building (153 EMAs); and 3) noise outside (31 EMAs).

The participants rated their ability to understand in these situations on a 9-point scale, ranging from 1=Nothing, 5=Sufficient, to 9=Everything. We could assume that even a rating of #5 (“Sufficient”) would be adequate for following a conversation, but for the values shown in Figure 3, we combined the ratings of #6 (Rather Much) and higher. As would be expected, the understanding ratings for in the home were the highest, but for all three of these difficult listening situations—understanding speech in background noise while walking—overall understanding was good. The highest rating of “Understand Everything” on the 9-point scale was given for 60% of the EMAs for home, 62% for inside a building, and 39% for outside.
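The aggregation behind these percentages can be sketched as follows; the `example` ratings below are invented illustration data, not study EMAs:

```python
# On the 9-point understanding scale (1=Nothing, 5=Sufficient,
# 9=Everything), the reported Figure 3 percentages combine ratings
# of 6 (Rather Much) or higher.

def percent_at_least(ratings, cutoff=6):
    """Return the percent of ratings at or above the cutoff."""
    hits = sum(1 for r in ratings if r >= cutoff)
    return 100.0 * hits / len(ratings)

example = [9, 6, 5, 8, 7, 4, 9, 6, 3, 9]  # ten hypothetical EMA ratings
print(percent_at_least(example))          # 70.0
```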

Figure 3. Listening in noise while moving in the home, inside a building, and outside: Percentages represent combined understanding ratings of #6 (Rather Much) or higher (9-point scale) for the EMA questions related to understanding speech in background noise while moving. Results shown for in the home (136 EMAs), in a building (153 EMAs), and when outside (31 EMAs).

A common listening situation that occurs while moving is having a conversation while walking down a busy street. For this condition, three EMA questions were central: Is the listening situation natural? Is the acoustic scene perception appropriate? What is the overall satisfaction for speech understanding? The first two of these were rated on a 4-point scale: Yes, Rather Yes, Rather No, and No. Satisfaction for speech understanding was rated on a 7-point scale similar to that used in MarkeTrak surveys: 1= Very Dissatisfied to 7=Very Satisfied.

The results for these three questions for the walking on a busy street with background noise condition are shown in Figure 4. Percentages are either percent of “Yes/Mostly Yes” answers, or percent of EMAs showing satisfaction (a rating of #5 or higher on the 7-point scale). As shown, in all cases, the ratings were very positive. Perhaps most notable was that 88% of the EMAs reported satisfaction for speech understanding for this difficult listening situation.

Figure 4. Listening on a busy street while moving: Percentages representing either the percent of Yes/Mostly Yes answers, or percent of EMAs reporting satisfaction (a rating of #5 or higher on the 7-point scale). The number of EMAs used for the analysis were 80 for Natural Impression, 63 for Acoustic Orientation, and 79 for Overall Satisfaction.

As discussed earlier, in addition to the motion sensors, there also was a new signal classification and processing system developed for the Xperience platform (Dynamic Soundscape Processing), with the primary goal of improving speech understanding from varying azimuths together with ambient awareness. Several of the EMA questions were geared to these types of listening experiences.

The participants rated satisfaction on a 7-point scale, the same as has been commonly used for EuroTrak and MarkeTrak. If we take the most difficult listening situation, understanding speech in background noise, the EMA data revealed 92% satisfaction for Xperience. We can compare this to other large-scale studies. The EuroTrak satisfaction data for this listening category differs somewhat from country to country, but in all cases falls well below the rating for Xperience. For example, the 2019 Norway data reveals only 51% satisfaction, the 2018 Germany satisfaction rate was 64%, and the 2018 UK satisfaction rate was 69%.

The findings of MarkeTrak 10 recently became available, and it is therefore possible to compare the Xperience EMA results to these survey findings. MarkeTrak 10 data used here for comparison were from individuals using hearing aids that were 1 year old or newer. While the EMA questions were not worded exactly like the questions on the MarkeTrak 10 survey, they were very similar and therefore provide a meaningful comparison. Shown in Figure 5 are the percentages of satisfaction (combined ratings of Somewhat Satisfied, Satisfied, and Very Satisfied) for overall satisfaction and for three different common listening situations. We did not have EMA questions differentiating small groups from large groups, but MarkeTrak 10 does: 83% satisfaction for small groups and 77% for large groups. What is shown for MarkeTrak for this listening situation in Figure 5 is 80%, the average of the two group findings. In general, satisfaction ratings for Xperience were very high and exceeded those from MarkeTrak 10, even when compared to the rather strong baseline of hearing aids less than 1 year old, and even though most of the EMA questions were answered in situations with noise.
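The single MarkeTrak group-conversation value used for comparison is simply the mean of the two reported group results:

```python
# MarkeTrak 10 reports satisfaction separately for small groups (83%)
# and large groups (77%); the comparison uses their simple average.
small_group = 83
large_group = 77
print((small_group + large_group) / 2)  # 80.0
```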

Figure 5. Shown is the percent satisfaction for the Xperience EMAs, compared to MarkeTrak 10 findings, for three different listening situations and for overall satisfaction. Overall satisfaction=1,938 EMAs; satisfaction in one-to-one conversations=564 EMAs; group conversations=151 EMAs; and conversations in noise=598 EMAs.

Summary

As technology advances, we continue to design hearing aid technology that more closely resembles the listening intent of the user. This might involve focus on speech other than that which is in front, enhanced ambient awareness, and also the specific listening needs when the hearing aid user is moving. The Signia Xperience provides very encouraging results in all of these areas. Laboratory data show significantly better speech understanding for speech from the sides, both when stationary and when moving. Real-world studies using EMA methodology reveal highly satisfactory environmental awareness, and higher overall user satisfaction ratings than have been obtained for either EuroTrak or the recent MarkeTrak10. Overall, for both efficacy and effectiveness, the performance of the Signia Xperience hearing aids was validated, and increased patient benefit and satisfaction is expected to follow.

Matthias Froehlich, PhD, is Head of Audiology Marketing at Sivantos GmbH in Erlangen, Germany. Eric Branda, AuD, PhD, is Director of Research Audiology for Sivantos US in Piscataway, NJ. Katja Freels, Dipl.-Ing., is a research and development audiologist at Sivantos GmbH with responsibilities that include the coordination of clinical studies and research projects.

CORRESPONDENCE can be addressed to: [email protected].

Citation for this article: Froehlich M, Branda E, Freels K. New dimensions in automatic steering for hearing aids: Clinical and real-world findings. Hearing Review. 2019;26(11):32-36.

References

  1. Froehlich M, Freels K, Powers TA. Speech recognition benefit obtained from binaural beamforming hearing aids: Comparison to omnidirectional and individuals with normal hearing. https://www.audiologyonline.com/articles/speech-recognition-benefit-obtained-from-14338. Published May 28, 2015.

  2. Cord MT, Surr RK, Walden BE, Olson L. Performance of directional microphone hearing aids in everyday life. J Am Acad Audiol. 2002;13:295-307.

  3. Powers T, Hamacher, V. Three-microphone instrument is designed to extend benefits of directionality. Hear Jour. 2002;55(10):38-45.

  4. Ricketts T, Hornsby B, Johnson E. Adaptive directional benefit in the near field: Competing sound angle and level effects. Seminars in Hearing. 2005;26(2):59-69.

  5. Mueller HG, Weber J, Bellanova M. Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. Int J Audiol. 2011;50(4):249-254.

  6. Chalupper J, Wu Y-H, Weber J. New algorithm automatically adjusts directional system for special situations. Hear Jour. 2011;64(1):26-33.

  7. Herbig R, Froehlich M. Binaural beamforming: The natural evolution. Hearing Review. 2015;22(5):24.