Tech Topic | December 2015 Hearing Review

The current study assessed the capability of Phonak AutoSense OS to provide optimal and preferable listening in three complex, real-life listening environments: 1) a highly echoic room with speech presented from behind and interfering noise; 2) a running car; and 3) a busy coffee shop. The results showed significantly higher speech recognition scores in the car and the echoic room with AutoSense OS than with a manual program selected by each participant as his or her “favorite” for that particular environment. Participants also had a strong subjective preference for the AutoSense OS program over the “favorite” manual program across all three challenging listening environments. These findings suggest that AutoSense OS not only provides significantly greater ease of use than a manual program, but can also provide optimal hearing in challenging, realistic listening environments.

The need to change programs or volume in hearing instruments to manage performance in different acoustic environments has been shown to be challenging and undesirable for hearing aid users. Phonak AutoSense OS automates hearing aid behavior by performing a real-time analysis of the acoustic surroundings and adjusting the hearing aid parameters to optimize speech understanding and comfort.

The following study was designed to assess the capability of AutoSense OS to provide optimal and preferable listening in three complex, real-life listening environments.

Introduction

People encounter a wide variety of acoustic environments every day. Hearing aid users, therefore, need devices that can accommodate a diverse range of acoustic environments. A single hearing aid program is neither sufficient nor reasonable for all encountered listening situations, since the signal processing needs can differ drastically depending on the acoustic characteristics present.

Since the advent of digital hearing aids, the ability of hearing aids to detect acoustic characteristics of the surrounding environment—and change the hearing aid program or features accordingly—has become increasingly advanced and accurate. The need for an automatic program is evident, given that many hearing aid users may be unsuccessful at changing hearing aid manual programs appropriately.

Automatic classification systems were originally developed for uses outside the field of audiology. Previously used for security, voice recognition, and military applications, classification systems were implemented in hearing aids in the 1990s.1 In a hearing aid application, the classification system works similarly to the human auditory system, utilizing particular characteristic features to classify the environment.2 The algorithms that drive hearing aid classification are complex; in addition to feature extraction, statistical rules ultimately determine the final gain model and signal processing applied.

A 3rd-generation automatic system. AutoSense OS, the third iteration of an automatic system from Phonak, is substantially more flexible, precise, accurate, and advanced than previous Phonak automatic systems. AutoSense OS performs its classification by extracting 35 features from the auditory environment and translating these acoustic cues into meaningful changes in the feature and gain model activity of the hearing aid.

The scenes that can be classified have increased in number and specificity. AutoSense OS can currently classify seven acoustic environments, including music and several varieties of “noisy” environments. “Speech in Loud Noise,” “Speech in Car,” and “Music” are specific, or “exclusive,” sound environments that the hearing aid can detect accurately, adjusting gain parameters, feature settings, and microphone configuration accordingly. All other classifications are calculated based on the proportions of certain acoustic parameters detected in the environment, and up to three scene classifications can be combined simultaneously. Through this proportional mixing of programs, AutoSense OS can activate more than 200 unique and audibly different hearing aid settings.
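
To make the proportional mixing concrete, below is a minimal, purely illustrative Python sketch, not Phonak’s actual algorithm: the strongest scene classes (up to three) are kept, their proportions are renormalized, and a blended setting is computed. All program names, parameters, and values here are hypothetical.

    # Hypothetical sketch of proportional program mixing (not Phonak's algorithm):
    # blend the settings of up to three detected scene classes, weighted by the
    # classifier's estimated proportions.

    # Example per-program settings: gain offset (dB) and noise-canceller strength (0-1).
    PROGRAM_SETTINGS = {
        "calm":            {"gain_offset_db":  0.0, "noise_canceller": 0.0},
        "speech_in_noise": {"gain_offset_db": -2.0, "noise_canceller": 0.6},
        "comfort_in_echo": {"gain_offset_db": -4.0, "noise_canceller": 0.3},
    }

    def mix_programs(proportions):
        """Blend program settings by the classifier's scene proportions.

        proportions: dict of program name -> weight; only the top three classes
        are kept, and their weights are renormalized to sum to 1.
        """
        top3 = dict(sorted(proportions.items(), key=lambda kv: kv[1], reverse=True)[:3])
        total = sum(top3.values())
        mixed = {"gain_offset_db": 0.0, "noise_canceller": 0.0}
        for name, weight in top3.items():
            for param, value in PROGRAM_SETTINGS[name].items():
                mixed[param] += (weight / total) * value
        return mixed

    # Example: an environment judged 60% calm, 30% speech-in-noise, 10% echoic.
    print(mix_programs({"calm": 0.6, "speech_in_noise": 0.3, "comfort_in_echo": 0.1}))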

An extensive smoothing progression ensures that there are no abrupt program changes audible to the hearing aid user, and data-streaming between the hearing aids ensures symmetry in exclusive program settings across the two devices. Additionally, the proprietary chip in Venture hearing aids allows for lower battery consumption and faster processing, facilitating consistent transitions and optimization of hearing aid programs in real time.
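
The smoothing described above can be pictured as a low-pass filter over the classifier’s scene proportions. The sketch below is an assumption-laden stand-in for whatever proprietary smoothing AutoSense OS actually applies; the update rate and smoothing constant are arbitrary.

    # Illustrative only: exponential smoothing of classifier outputs so that the
    # applied blend of programs changes gradually rather than abruptly.

    def smooth_proportions(previous, current, alpha=0.05):
        """Low-pass filter the per-class proportions between analysis frames.

        previous, current: dicts of class name -> proportion for the last and
        newest frame; alpha controls how quickly the applied mix can change.
        """
        classes = set(previous) | set(current)
        return {c: (1 - alpha) * previous.get(c, 0.0) + alpha * current.get(c, 0.0)
                for c in classes}

    # Example: the raw classification jumps to "speech_in_noise", but the applied
    # mix moves there over many frames, so the change is not audible as a switch.
    state = {"calm": 1.0}
    for _ in range(50):
        state = smooth_proportions(state, {"speech_in_noise": 1.0})
    print(state)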

Benefits of automatic systems. The benefits of AutoSense OS are multi-faceted. First, by adjusting the gain model in real time, the hearing aid user can always be in an acoustically optimized setting. This technology frees the hearing aid user from deciding which hearing aid program is appropriate, eliminating the need for manual button-pushing.

A study by Desjardins and Doherty3 at Syracuse University in 2009 used the Practical Hearing Aid Skills Test (PHAST) to assess competency in areas related to hearing aid care, use, and maintenance. Experienced hearing aid users were asked to demonstrate these tasks as part of the Skills Test, and the results showed that activation of the manual “noise” program was one of the most difficult tasks for these experienced hearing aid users to demonstrate.

This finding highlights the importance of a hearing aid’s automatic capability, given the potential unreliability of a hearing aid user in complying with this type of instruction, even with confirmed understanding of the task. Given the changing environments and demands on attention in the real world, it is unrealistic to think that hearing aid users could, or should, be responsible for changing the hearing aid program during the course of their everyday lives.

Further, recent consumer research data show that the drivers of hearing aid satisfaction center on sound quality, value, and the effectiveness of enhanced features. When it comes to sound quality, the clarity of sound, degree of naturalness, fidelity, and richness are most influential in impacting satisfaction.4 Therefore, it can be assumed that the greater the capability and accuracy with which the hearing aid can adjust volume or programs to optimize comfort and clarity, the better the listening experience for the hearing aid user.

Additionally, research indicates not only that switching into a noise program is difficult or dissatisfying for hearing aid users, but also that a single noise program may not be appropriate or accurate enough to provide optimal listening in realistic use cases. Traditionally, hearing aid users were given one noise program that activated the directional microphones to serve in all “difficult” or “noisy” listening environments, and used the “default” or “quiet” program for all other listening situations. Research and anecdotal evidence suggest that hearing aid users operate within a wide range of listening environments that do not fall within the extremes of “noise” or “quiet.”

A 2015 research study by Taylor and Hayes5 suggests that classifying environments as either “quiet” or “noise” could lead to significant misclassification of a hearing aid program and potentially substandard hearing. A greater number of hearing aid programs is therefore needed to optimize hearing performance across a wider range of listening environments, and many hearing aid manufacturers have responded with programs optimized for echo, outdoors, wind, and other listening environments.


Figure 1. Start-up program applied in hearing aid fittings.

However, as the number of acoustically optimized programs increases, the hearing aids become increasingly complex to use, and the likelihood rises that the user will activate programs that are disadvantageous. An automatic system that can accurately adapt parameters to the environment is therefore a requirement with the increasingly complex signal processing capabilities of today’s hearing instruments.

Based on nearly 150,000 adult hearing aid fittings collected by Phonak, 92% of all fittings used the automatic program as the start-up program (Figure 1). This consistency of use demonstrates the desire and need for an automatic program; based on these data, AutoSense OS is the predominant choice when fitting Phonak devices.

A research study6 performed at the University of Luebeck provided insight into the capability and accuracy of the AutoSense OS system. The investigators designed four listening setups in the sound booth, each created to fully activate a specific hearing aid program when the hearing aid was set to AutoSense OS. The investigators then identified the favorite manual program for each research participant in each of the four listening setups. The speech reception threshold (SRT) in noise was compared between the “favorite” manual program and the program “selected” by AutoSense OS. Results revealed significantly better speech performance in the program selected by AutoSense OS than in the program selected by each participant as his or her “favorite.” This study revealed two main findings:

1) Research participants were not accurate at selecting the hearing aid program in which they would achieve superior speech understanding, and

2) The program selected by AutoSense OS allowed improved performance in noisier environments compared with the program selected by the research participants.

This study at the University of Luebeck was valuable in that it showed the capability of AutoSense OS to provide improved understanding in noise over a manual program selected by the participant. However, it was limited in that the listening scenes were simulated in a sound booth and designed with the intended purpose of activating specific programs in AutoSense OS.

Field Study


Figure 2. Average audiogram for 14 participants in the AutoSense OS study. Error bars representing standard deviation at each frequency are also shown.

The present study, performed at the Phonak Audiology Research Center (PARC), was intended to build upon the University of Luebeck study by evaluating the true capability of AutoSense OS in challenging, real-world listening environments. The study took place in three environments that hearing aid users frequently report as challenging, and the methodology was designed to make each real-life environment as controlled and repeatable as possible. The purpose of the study was to determine whether hearing aid users prefer a manual hearing aid program of their choice or the program selected by AutoSense OS, and whether they understand speech better in their chosen manual program or in the program selected by AutoSense OS in real-world environments. Similar or better preference and performance in AutoSense OS would provide confidence that the automatic mode can give Phonak hearing aid users superior performance with the greatest ease of use.

A total of 14 adult participants ranging from 21 to 85 years of age (mean = 65 years; SD = 16) with mild-to-moderately severe sensorineural hearing impairment participated in this study. Figure 2 shows mean pure-tone air conduction thresholds measured for the group. All participants were native English speakers and were recruited through IRB-approved flyers. The study was conducted with the approval of the Western Institutional Review Board (WIRB).

Hearing Aid Fitting

Each participant was fit with a set of Phonak Venture (V-90) receiver-in-canal (RIC) devices with size 13 batteries and power domes. Hearing aids were programmed based on an audiogram performed within the last 6 months. Phonak Target 4.1 fitting software was used with the NAL-NL2 prescription and a gain of 100%. Coupling was set to power domes in the software, and SoundRecover was disabled for all participants.

Table 1. Hearing aid programs saved for testing.

Startup Program: AutoSense OS
Manual Program 1: Calm Situation
Manual Program 2: Speech in Noise
Manual Program 3: Speech in Loud Noise
Manual Program 4: Comfort in Echo
Manual Program 5: Speech in Car

The AutoSense OS program and five separate manual programs shown in Table 1 were saved in the hearing aids for testing and comparison. Settings were also saved to the ComPilot II accessory, which was used throughout the experiment to enable program switching via the Remote Control App.

Real-ear probe microphone measurements were performed in the “Calm Situation” program with the AudioScan Verifit2 system to verify appropriate amplification and audibility across all frequencies. Real-ear insertion gain (REIG) was adjusted to within ±5 dB of NAL-NL2 targets.
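
As an illustration of the ±5 dB verification criterion, the sketch below compares hypothetical measured REIG values against hypothetical targets. The numbers are invented; in the study, targets came from NAL-NL2 and measurements from the Verifit2 system.

    # Minimal sketch of the +/-5 dB fit-to-target check. All values are made up.
    TARGET_REIG   = {250: 5, 500: 10, 1000: 18, 2000: 25, 4000: 28}   # dB, hypothetical NAL-NL2 targets
    MEASURED_REIG = {250: 4, 500: 12, 1000: 16, 2000: 27, 4000: 24}   # dB, hypothetical measurements

    def frequencies_out_of_tolerance(target, measured, tolerance_db=5.0):
        """Return the frequencies (Hz) where measured REIG deviates from target by more than tolerance_db."""
        return [f for f in target if abs(measured[f] - target[f]) > tolerance_db]

    out = frequencies_out_of_tolerance(TARGET_REIG, MEASURED_REIG)
    print("Adjust gain at:", out if out else "none; fit is within +/-5 dB of target")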

Test Setup

This study was conducted in three challenging, real-life listening environments: 1) an open living space in PARC set up as a small apartment; 2) a car; and 3) a coffee shop. The IEEE sentences were used as the speech perception measure. These sentences were presented from a Bose SoundLink mini speaker connected via its 3.5 mm auxiliary input to the headphone jack of a smartphone, on which the IEEE sentences were saved as a playlist. The sentence lists were randomized for each participant.

The presentation level of the sentences was set at a pre-calibrated level for each of the three listening environments. The description of the speech presentation level, noise source, and noise level is provided below with the description of each test environment. When the noise source was provided by the environment (ie, coffee shop and car), the noise level was recorded throughout the entire participant testing session in that environment to ensure a consistent overall dB level across participants.
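
The article does not specify how the recorded noise levels were summarized into a single value. One common approach, shown below purely as an assumption, is to energy-average the sampled levels into an equivalent continuous level (Leq) rather than averaging the dB values directly.

    # Hypothetical sketch: summarize a session's sampled noise levels as an
    # equivalent continuous level (Leq). The sample values are invented.
    import math

    def equivalent_level_db(samples_db):
        """Energy-average a list of sound level samples (dB) into a single Leq value."""
        mean_power = sum(10 ** (level / 10) for level in samples_db) / len(samples_db)
        return 10 * math.log10(mean_power)

    # e.g., one reading per second over a short stretch of a coffee shop session
    print(round(equivalent_level_db([78, 81, 80, 83, 79, 80]), 1))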

Two experimenters participated in every test session. All program switching was performed by one experimenter using the Phonak Remote Control App. This application connects via Bluetooth to a ComPilot II relay worn around the neck of the participant, and the ComPilot II communicates with the hearing instruments by near-field induction. Communication between the aids and the remote is two-way, so accurate program changes could be visualized and confirmed on the ComPilot II by the experimenter.

This experiment was double-blinded in that one experimenter was always responsible for switching the hearing aid program, and the other experimenter was always responsible for scoring the IEEE sentences. This ensured that the scoring experimenter did not know what program the participant was in at any given time. The participant also did not know what program the hearing aids were set to, and the confirmation tones were disabled in the hearing instruments so that participants would not receive information about the selected program.

Description of Test Environments


Figure 3. Schematic of Listening Loft test setup used for comparison between AutoSense OS and a manual hearing aid program.

Listening loft. The Phonak Audiology Research Center (PARC) is equipped with a room called the Listening Loft that is designed to look and feel like the first floor of an apartment. Fully equipped with a sink, kitchen table, and living room area, it is a realistic environment in which to assess hearing aid products and features. It also has curtains around the room that, when pulled back, can result in a moderate amount of reverberation (RT = 0.8 s), and a wireless speaker system (SONUS) that allows for presentation of recorded audio tracks from various points around the room.

The schematic of the test setup and room configuration is shown in Figure 3. The participant was seated at the end of a kitchen table situated on one side of the Listening Loft, facing the table. Restaurant noise at 50 dB SPL was presented from a speaker on top of the kitchen counter at 40° azimuth relative to the participant. Speech was presented from behind the participant at 225° azimuth from the wireless Bose speaker. This environment was designed to emulate a situation in which a family member or friend is speaking from the next room, concurrently with some soft noise interference.


Figure 4. Schematic of car test setup used for comparison between AutoSense OS and a manual hearing aid program.

Car. The schematic for the test setup and configuration for the car is shown in Figure 4. The same 2015 mid-size car was used for testing in the car environment for each participant. The speech testing always took place along the same stretch of road located in an office park. Speed was kept consistent at 30 mph throughout the testing, and the air conditioner level was set at 3 for all participants. Windows were always up for testing, and speech testing never took place when it was raining or when the road was wet.

The participant sat in the passenger seat of the car, and Experimenter 1 drove while holding the Bose speaker at the level of her mouth, with the speaker facing the windshield. The participant was instructed to face forward towards the windshield throughout all speech testing. The engine and road noise sources were consistently measured at 60 dBA. Speech was presented from the driver’s seat at 60 dB SPL.


Figure 5. Schematic of coffee shop setup used for comparison between AutoSense OS and a manual hearing aid program.

Coffee shop. The third test environment was a coffee shop in downtown Naperville, Ill. This particular coffee shop was chosen because it is consistently bustling and noisy at all hours of business. Testing was completed at the same table for each participant (Figure 5). Speech was presented from directly across the table, at the level of Experimenter 1’s mouth, at the maximum level of the speaker, and the background noise was consistently measured at an average of 80 dBA.

Procedure

Following programming and verification of the hearing aids, testing was completed in each of the three listening environments outlined above. The order of the environments was randomized for each participant.

The first task in each listening environment was to find each participant’s “favorite” manual program for that particular environment. Three potential “favorite” program options, deemed “most appropriate” for that particular environment, were allocated for each test environment. For example, the manual programs presented as options for the favorite in the Listening Loft were “Calm Situation,” “Comfort in Echo,” and “Speech in Noise.” The three program options for the car were “Speech in Car,” “Speech in Noise,” and “Comfort in Echo.” The three program options for the coffee shop were “Speech in Noise,” “Speech in Loud Noise,” and “Comfort in Echo” (see Table 2 for a summary of the available manual program options for each test environment).

Table 2. Manual program options available in each test environment.

Listening Loft: Calm Situation, Comfort in Echo, Speech in Noise
Car: Speech in Car, Speech in Noise, Comfort in Echo
Coffee Shop: Speech in Noise, Speech in Loud Noise, Comfort in Echo

The “favorite” program was determined through a paired comparison task in which the participant listened to two sentences in each of the first two manual program options and reported which one was “best.” The participant then listened to an additional two sentences in the reported program and two sentences in the third, not-yet-tested program, and again reported which was “best.” The final program reported was recorded as that participant’s “favorite.” The presentation order of the three programs was randomized for each participant.
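
The selection logic can be summarized in a short sketch: compare the first two randomized options, carry the winner forward, and compare it against the third. The ask_preference function below is a hypothetical stand-in for the participant’s verbal report of which program sounded “best.”

    # Sketch of the "favorite program" paired comparison described above.
    import random

    def find_favorite(programs, ask_preference):
        """programs: the three manual program names for one environment.
        ask_preference(a, b) returns whichever of a or b the participant calls "best"."""
        order = programs[:]
        random.shuffle(order)                        # presentation order randomized per participant
        winner = ask_preference(order[0], order[1])  # two sentences heard in each program
        return ask_preference(winner, order[2])      # winner vs the remaining program

    # Example with a simulated participant who always prefers "Speech in Car"
    favorite = find_favorite(["Speech in Car", "Speech in Noise", "Comfort in Echo"],
                             lambda a, b: a if a == "Speech in Car" else b)
    print(favorite)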

Speech testing in each environment always began with an adaptation period in which the hearing aids could adjust any adaptive features based on noise levels or other environmental acoustics. Participants were presented with one list of 20 IEEE sentences in their reported “favorite” manual program and one list of 20 IEEE sentences with the hearing aid set to the AutoSense OS program. The order of the “favorite” and AutoSense OS programs was randomized, so some participants started by repeating a list of sentences in the “favorite” program and some started in the AutoSense OS program. Participants were instructed to repeat back as much of each sentence as they could and to guess if they were not sure. The number of words repeated correctly was counted and expressed as a percentage of the total number of words in the sentence list.
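
A minimal sketch of the scoring calculation follows. It simplifies the IEEE keyword-scoring conventions and uses invented numbers purely for illustration.

    # Count correctly repeated words and express them as a percentage of all
    # words in the 20-sentence list (here assumed to be 5 keywords per sentence).

    def percent_words_correct(words_correct_per_sentence, words_total_per_sentence):
        """Both arguments are lists with one entry per sentence in the list."""
        return 100.0 * sum(words_correct_per_sentence) / sum(words_total_per_sentence)

    # Example: 80 of 100 keywords repeated correctly -> 80.0%
    print(percent_words_correct([4, 3, 5, 4] * 5, [5] * 20))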

Immediately following the two lists of IEEE sentences (one list in each program), the participants were asked to complete a questionnaire asking which of the two programs they preferred relative to comfort, sound quality, background noise suppression, and speech understanding. They were also asked to choose which of the two (AutoSense OS or the manual program) they liked best, overall, for each particular listening environment. (A copy of the questionnaire is available upon request from the authors.)

Results


Figure 6. Average IEEE sentence score in all three acoustic environments for both the AutoSense OS program and the manual “favorite” program. * = significant difference at p < 0.05. Standard error bars are displayed.

Speech Recognition. The speech recognition scores on the IEEE sentence test were averaged across participants for each test environment, yielding an average score for AutoSense OS and an average score for the manual program in each of the three environments. Figure 6 shows the mean speech recognition scores in each of the two programs, across each test environment.

The error bars represent the standard error of the mean. A repeated measures analysis of variance (ANOVA) revealed a significant effect of hearing aid program in the Listening Loft environment (F(1, 15.77) = 5.61, p < 0.05) and the car environment (F(1, 7.2) = 2.10, p < 0.05). Performance was significantly better in the AutoSense OS program than in the manual program for both the Listening Loft and the car environments. There was no statistically significant difference between the two programs in the coffee shop environment, but there was a strong trend toward better performance in the AutoSense OS program than in the manual program.
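
For readers who want to reproduce this style of analysis, the sketch below runs a one-way repeated-measures ANOVA with program as the within-subject factor, using statsmodels on simulated data. It is not the study’s data or analysis script, and the simulated effect size is arbitrary.

    # Repeated-measures ANOVA sketch on simulated scores (not the study data).
    import numpy as np
    import pandas as pd
    from statsmodels.stats.anova import AnovaRM

    rng = np.random.default_rng(0)
    n = 14
    scores = pd.DataFrame({
        "subject": list(range(n)) * 2,
        "program": ["AutoSense OS"] * n + ["Manual favorite"] * n,
        # simulated % words correct, with an arbitrary advantage for the automatic program
        "score": np.concatenate([rng.normal(75, 10, n), rng.normal(68, 10, n)]),
    })

    result = AnovaRM(scores, depvar="score", subject="subject", within=["program"]).fit()
    print(result.anova_table)   # F value, numerator/denominator df, and p-value for the program effect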

Individual participant speech recognition scores are also plotted for all three listening environments in Figures 7a-c. These graphs allow for visualization of the trend across individual participants that may not be apparent when looking at the average speech recognition scores.


Figure 7a-c. Individual participant speech recognition scores on the IEEE sentence test for each testing environment: Listening Loft, car, and coffee shop. Bars indicate % words correct.

A total of 12 out of 14 participants had the same or better speech recognition score in AutoSense OS than in the manual program when tested in the Listening Loft and coffee shop environments. Similarly, 13 out of 14 participants had the same or better speech recognition score in AutoSense OS than in the manual program when tested in the car environment. This indicates that the vast majority of participants had the same or better speech recognition performance while listening in AutoSense OS than in a manual program across this wide variety of listening environments.

Subjective questionnaire. The questionnaire completed for each listening environment used a 5-point scale. If the participant rated the AutoSense OS program and the manual program equally, the answer was assigned a value of 0. If the participant rated the AutoSense OS program as slightly better, the answer was assigned a value of 1, and a strong preference for AutoSense OS was assigned a value of 2. Similarly, a weak preference for the manual program was assigned a value of -1, and a strong preference for the manual program was assigned a value of -2.
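
The scoring rule can be expressed compactly as a lookup from response to value, averaged per domain. The response labels below are hypothetical paraphrases of the questionnaire wording.

    # Map each 5-point questionnaire response to the -2..+2 scale and average.
    PREFERENCE_SCORE = {
        "strongly prefer manual":       -2,
        "slightly prefer manual":       -1,
        "no preference":                 0,
        "slightly prefer AutoSense OS":  1,
        "strongly prefer AutoSense OS":  2,
    }

    def mean_preference(responses):
        """responses: list of response labels for one domain (e.g., sound quality)."""
        return sum(PREFERENCE_SCORE[r] for r in responses) / len(responses)

    print(mean_preference(["strongly prefer AutoSense OS", "no preference",
                           "slightly prefer AutoSense OS"]))   # -> 1.0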


Figure 8a-c. Results of the subjective questionnaire that asked participants to compare their listening experience and perceptions in AutoSense OS with the manual “favorite” program in all three listening environments. A score of 0 indicates no preference between AutoSense OS and the preferred manual program. A score above 0 indicates a preference for AutoSense OS, and a score below 0 indicates a preference for the preferred manual hearing aid program. Standard error bars are displayed.

The average ratings across all participants and listening environments are shown in Figures 8a-c. These findings indicate a strong preference for the AutoSense OS program across all domains for all three listening environments.

Discussion

The current study investigated the capability of automated hearing instrument behavior to provide superior hearing performance compared with programs manually selected by experienced hearing aid users in three complex, real-life listening environments. A program like AutoSense OS is strongly preferable to, and could be considered necessary in place of, the manual program-switching that is sometimes recommended to hearing aid users. The AutoSense OS program removes the responsibility of thinking about the listening environment and changing the hearing aid program, as the feature is designed to analyze the acoustics of the environment and optimize the hearing aid signal processing accordingly.

The ability of AutoSense OS to provide superior listening compared to a manual program in the sound booth was demonstrated in a previous research study.6 However, the question remained whether AutoSense OS could accurately change the hearing aid program in real-world listening environments in a way that provided optimal listening and higher subjective preference than a manual program. The current study, which took place in three different listening environments (a reverberant environment, a car, and a coffee shop), asked participants to select their favorite manual program and then assessed speech recognition performance in that manual program as well as in AutoSense OS. Subjective data were also collected through a questionnaire that asked participants to rate their preference between the AutoSense OS program and the manual program.

Results revealed significantly better speech performance in AutoSense OS than in the manual program in both the reverberant environment and the car environment, and a strong trend toward superior performance in AutoSense OS in the coffee shop environment. All subjective results indicated a strong preference for AutoSense OS in all three environments. These findings indicate that AutoSense OS is able to effectively optimize hearing aid parameters in accordance with the surrounding environment, and does so in a way that yields higher patient preference and better speech intelligibility than a “preferred” manual program.

Conclusion

The results of this study suggest that AutoSense OS not only provides the greatest ease of use for hearing aid users compared with manual hearing aid programs, but can also be relied on to provide settings that support speech understanding and patient comfort. Results specifically indicated that the vast majority of participants had the same or better speech recognition performance while listening in AutoSense OS than in a “preferred” manual program across a wide variety of listening environments. The subjective findings further strengthened these results by showing a strong subjective preference for AutoSense OS across differing listening environments. This indicates that the adjustment and adaptation of hearing aid parameters facilitated by AutoSense OS does not optimize speech intelligibility at the expense of sound quality.

The technological advancements of AutoSense OS allow for the hearing aid to precisely detect the acoustic features of the surrounding environment, and adjust the gain model and feature settings based on this information. The current study shows that, beyond accuracy and fine-tuned detection of environments, the resulting hearing aid settings can optimize speech understanding, as well as comfort and sound quality.

References

  1. Büchler MC. Algorithms for sound classification in hearing instruments. PhD thesis, Swiss Federal Institute of Technology, Zurich, Switzerland;2002.

  2. Bregman AS. Auditory Scene Analysis: Hearing in Complex Environments. Cambridge, Mass: MIT Press;1990.

  3. Desjardins JL, Doherty KA. Do experienced hearing aid users know how to use their hearing aids correctly? Am J Audiol. 2009;18:69-76.

  4. Abrams H, Kihm J. An introduction to MarkeTrak IX: A new baseline for the hearing aid market. Hearing Review. 2015;22(6)[June]:20.

  5. Taylor B, Hayes D. Does current hearing aid technology meet the needs of healthy aging? Hearing Review. 2015;22(2)[Feb]:22-26.

  6. Latzel M. AutoSense OS: Benefit of the next generation of technology automation. Field Study News. 2015. Warrenville, Ill: Phonak.


Lori Rakita, AuD, is a Research Audiologist.


Christine Jones, AuD, is Director of Pediatric Clinical Research at Phonak Audiology Research Center (PARC) in Warrenville, Ill.

Correspondence can be addressed to HR or Dr Rakita at: [email protected]

Original citation for this article: Rakita L, Jones C. Performance and Preference of an Automatic Hearing Aid System in Real-World Listening Environments. Hearing Review. 2015;22(12):28.