Tech Topic | November 2015 Hearing Review

A Q&A about the new Siemens Spatial SpeechFocus, which directs amplification toward specific spatial regions for better speech understanding.

Speech understanding and listening comfort in a variety of background noise situations have historically been key problems for people using hearing aids. The recent introduction of Siemens Spatial SpeechFocus is the culmination of several directional strategies for the enhancement of speech information and comfort in noise. Its purpose is to give hearing aid users another useful solution—automatic or manually controlled—in the unique listening environments they encounter. The following 10 questions and answers address this important new feature:

1) From a practical standpoint, what does “Spatial SpeechFocus” mean?

It refers to placing the focus of amplification toward a specific spatial region surrounding the user. Of course, we have had traditional directional amplification for decades. It’s well known that this places the focus of amplification toward the front hemisphere—the “look direction” of the user. When coupled with effective noise reduction, this can provide a substantial signal-to-noise ratio (SNR) advantage (compared to omnidirectional) of around 6 dB when speech is from the front and there is surrounding noise.1

There are times, however, when we may want to hear someone behind us, and we are not able to turn around to face the talker; a typical example would be riding in the front seat of a car while conversing with someone in the backseat. To satisfy this listening situation, Siemens introduced the SpeechFocus algorithm several years ago, which applies a reverse-cardioid amplification pattern. Research has revealed that this algorithm effectively improves speech recognition from the back when background noise is present.2,3
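
For readers who like to see the geometry, the standard first-order microphone patterns can be written compactly. The sketch below is textbook math, not the Siemens implementation; the function name and parameters are our own illustration:

```python
import numpy as np

def first_order_sensitivity(theta_deg, alpha=0.5, focus="front"):
    """Standard first-order directional pattern.

    alpha = 0.5 gives a cardioid: sensitivity 1.0 toward the focus
    direction and a null on the opposite side. focus="back" flips the
    pattern into the reverse (anti-) cardioid described above.
    """
    theta = np.deg2rad(theta_deg)
    sign = 1.0 if focus == "front" else -1.0
    return alpha + (1.0 - alpha) * sign * np.cos(theta)
```

Setting focus to "back" yields full sensitivity at 180° and a null at 0°, ie, the anti-cardioid behavior of SpeechFocus.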

The combination of traditional adaptive directional processing and the reverse-cardioid SpeechFocus provided solutions for the majority of listening situations; however, there was still the issue of understanding speech when the target talker is to the right or left side, and turning to face the talker is not a viable option. In fact, the most common communication in a car is talking to a companion to the side.

This led to the development of the Spatial SpeechFocus algorithm, which not only can focus to the front and back, but also allows for maximizing the amplification to either the right or left side while reducing noise from other azimuths. When added to the other directional features, there is now the option of a 360° directional focus. We should mention that this side-look directionality is possible only because of the advanced e2e Wireless full-audio binaural beamforming.

2) What do you mean by binaural beamforming?

This processing was explained in detail by Kamkar-Parsi et al,4 and the patient benefit for speech recognition using binaural beamforming also has been reported.5,6 Siemens introduced wireless communication between hearing aids over 10 years ago,7 but what we now have is binaural processing using full-audio wireless sharing between the two instruments. That is, for both the right and left hearing aid, the binaural beamformer takes as input the local signal, which is the monaural directional signal, and the contralateral signal, which is the monaural directional signal transmitted from the opposing hearing instrument (ie, via the e2e Wireless 3.0 binaural wireless link). The output of the beamformer is generated by intelligently combining the weighted local signal and the weighted contralateral signal.
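
Conceptually, the per-ear output is a weighted sum of the two directional signals. The sketch below is our own minimal illustration, not the proprietary implementation; in a real system the weights would be adaptive and computed per frequency band:

```python
import numpy as np

def binaural_beamformer_output(local_sig, contralateral_sig, w_local, w_contra):
    """Illustrative per-ear combination of the two monaural directional signals.

    local_sig         -- directional signal picked up by this hearing aid
    contralateral_sig -- directional signal received from the opposite aid
                         over the binaural wireless link
    w_local, w_contra -- beamformer weights, chosen to enhance the target
                         direction and suppress noise
    """
    return w_local * np.asarray(local_sig) + w_contra * np.asarray(contralateral_sig)
```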

The binaural audio transmission introduced with the Siemens binax platform enables not only beamforming for situations where the target talker is in front, but also beamforming to the side. By suppressing noise from one side and enhancing the target signal from the other side at both ears, this technology can increase listening comfort as well as speech understanding.

3) Is Spatial SpeechFocus the same as the Narrow Directionality algorithm?

No, but the two are complementary. While both rely on binaural beamforming, there are some important differences. First, when Spatial SpeechFocus is implemented, the directivity pattern is broader than what you would have with Narrow Directionality. Narrow Directionality applies only to the “look direction”—the patient will not have Narrow Directionality to the sides or back unless he or she turns to face those directions.

Second, the actual width of the directional pattern of Spatial SpeechFocus is more similar to that used with traditional directional processing. In contrast, Narrow Directionality usually is reserved for a listening situation when the patient knows that he or she will be communicating with only a few people located relatively close to each other in the frontal hemisphere—a typical conversation at a noisy restaurant. Unlike Narrow Directionality, Spatial SpeechFocus can adapt to a variety of listening situations, including a focus to either the left or right, or behind the patient.

4) Is this feature automatic?

Spatial SpeechFocus can be automatic, or controlled by the user. When controlled automatically, signal analyses are performed in each hearing aid and then compared to determine the source (azimuth) of a given speech signal. The beamformer can then focus to the front, back, left, or right, depending on the direction from which the target signal originates. If speech is coming from both sides, or if a “quiet” listening situation is detected, an omnidirectional focus is chosen.
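
As a rough illustration of this decision logic, here is a toy rule of our own, not the proprietary classifier; the direction labels, the threshold, and the level estimates are all assumptions:

```python
def choose_focus(speech_levels_db, speech_threshold_db=50.0):
    """Toy steering rule. speech_levels_db maps each candidate direction
    ('front', 'back', 'left', 'right') to an estimated speech level,
    derived from comparing the analyses of the two hearing aids."""
    # Quiet situation: no direction contains a clear speech signal
    if max(speech_levels_db.values()) < speech_threshold_db:
        return "omni"
    # Speech active on both sides at once: stay omnidirectional
    if (speech_levels_db["left"] >= speech_threshold_db
            and speech_levels_db["right"] >= speech_threshold_db):
        return "omni"
    # Otherwise, steer the beam toward the dominant speech direction
    return max(speech_levels_db, key=speech_levels_db.get)
```

With estimated speech levels of 42 dB (front), 61 dB (back), 40 dB (left), and 38 dB (right), for example, this toy rule would steer the focus to the back.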

In order to assure listener comfort, the recognition of the need for a different feature set should be as fast as possible; on the other hand, the transition should be slow and gradual enough that the user cannot hear any artifacts associated with the change. The switching is performed synchronously on both ears, using a smooth, gradually fading transition, so the patient does not hear the switch. Additionally, in all situations the talker(s) remain correctly perceived in space because the binaural cues are, for the most part, maintained.
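
A minimal sketch of such a fade (our own illustration; the linear ramp shape is an assumption, and a real device would apply the same ramp in both aids to keep the ears synchronized):

```python
import numpy as np

def crossfade_outputs(old_output, new_output):
    """Blend the previous beamformer output into the new one with a
    gradual ramp so the change in directional focus is inaudible."""
    n = len(old_output)
    ramp = np.linspace(0.0, 1.0, n)   # 0 -> old pattern, 1 -> new pattern
    return (1.0 - ramp) * np.asarray(old_output) + ramp * np.asarray(new_output)
```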

The automatic function of Spatial SpeechFocus can be achieved in two different ways. In the “Universal” program, the signal classification system of the hearing aids classifies different acoustic environments, and the processing of the hearing aids varies depending on the classification. One of the classifications is “car” (ie, based on the noise spectrum normally present when a car is in motion). When the car situation is detected, the hearing aids automatically switch to the Spatial SpeechFocus algorithm, as we know that, when traveling in a car, it is often necessary to hear speech from the sides or the back.
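
Reduced to pseudocode, the classifier’s label simply selects the directional strategy. This is a toy sketch; the class labels and the fallback logic are our assumptions, not the actual decision tree:

```python
def select_processing(environment_class, speech_in_noise_detected):
    """Toy program-selection logic for the automatic 'Universal' program."""
    if environment_class == "car":
        # In a car, talkers are often to the side of or behind the listener
        return "spatial_speechfocus"
    if speech_in_noise_detected:
        return "directional"   # conventional front-focused processing
    return "omni"
```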

A second option is to provide the patient with a separate program that contains the Spatial SpeechFocus algorithm—this overrides the “car classification,” meaning that the focus to different directions will activate whenever speech-in-noise (at specified levels) is detected. The user needs to manually select this program option for a given listening situation. In the fitting software, this program option is referred to as “Stroll.”

5) Would it be reasonable for a patient to use the Spatial SpeechFocus program for all listening situations?

Probably not. This algorithm is designed primarily for listening situations when it is not possible (or reasonable) to look at the desired talker. As with any automatic hearing aid function, there is always the potential that the automatic decision is contrary to what the patient desires.

For example, let’s say the user is in a dedicated Spatial SpeechFocus program, in a group party situation, and is facing his or her soft-speaking communication partner. If there is a loud talker from the back or off to the side, it’s possible the algorithm would focus on the loudest speech signal—in this case, the wrong talker. For situations like this, traditional directional processing or Narrow Directionality would be the better choice; the signal classification system would select one of these automatically, except when the user is in a dedicated Spatial SpeechFocus program.

6) What if the user wants to control the Spatial SpeechFocus settings?

We mentioned that patients can simply switch to a dedicated Spatial SpeechFocus program, but more specific control is available through the patient’s smart phone (see Varallyay et al8 for review). When adjustments are made in this manner, the patient uses the wireless Bluetooth adapter, called “EasyTek,” and a special smart phone app, which provides the “Spatial Configurator” function. (An app will soon be available that allows for direct phone-to-hearing-aid communication, so the EasyTek relay will not be needed.)

The Spatial Configurator has two components: “Span” and “Direction.” The Span control adjusts the width of the focusing area around the hearing aid—which can be anything from a full 360° focus of the environment, to front-hemisphere only, to a very narrow focus to the front. Other features also are altered simultaneously with the Span control. For example, the noise reduction is increased and the non-speech-related amplification is reduced when the user zooms to the front.
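
To make that coupling concrete, here is a toy mapping from the selected span to those linked settings; the function, the 360°-to-narrow scaling, and all dB values are our own illustrative assumptions, not published parameters:

```python
def apply_span(span_degrees):
    """Toy coupling of the Span control to linked features: as the focus
    narrows, noise reduction strengthens and amplification for sound
    outside the beam is reduced (illustrative values only)."""
    open_fraction = span_degrees / 360.0   # 1.0 = full 360-degree focus
    return {
        "span_deg": span_degrees,
        "noise_reduction_db": round(12.0 * (1.0 - open_fraction), 1),
        "out_of_beam_gain_reduction_db": round(10.0 * (1.0 - open_fraction), 1),
    }
```

In this toy model, zooming from a full 360° span down to 60° raises the noise reduction from 0 to 10 dB while attenuating sound outside the beam by about 8 dB.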

As the name suggests, the Direction control allows the user to adjust the focus from the front, to the back, or to either side. The user interface for both the Span and the Direction controls of the Spatial Configurator is intuitive and easy to operate. Whenever desired, the user can return steering control to the automatic system by simply pressing a button in the app display. In addition, a separate subsystem checks, 15 minutes after the Spatial Configurator is activated, whether the user is still in the same acoustic environment. If not, the device automatically takes back control, without the need for any user intervention.
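
A toy sketch of that 15-minute check (our own illustration; the function names and the one-shot timing are assumptions about behavior the article describes only at a high level):

```python
import time

def configurator_watchdog(get_environment_class, class_at_activation,
                          check_after_s=15 * 60):
    """After 15 minutes of manual steering, compare the current acoustic
    environment with the one present when the user took control; if it
    has changed, hand steering back to the automatic system."""
    time.sleep(check_after_s)
    if get_environment_class() != class_at_activation:
        return "automatic"   # device takes back control
    return "manual"          # user keeps manual steering
```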

7) When the focus is on speech from one side or the other, can the patient still localize other sounds in the environment?

The Spatial SpeechFocus algorithm was specifically designed to maintain localization (see Kamkar-Parsi et al4 for review). Consider that, when sound is coming from one side of the head, it will have two major differences at the two ears. First, it will arrive earlier at the ear closer to the sound source, which is referred to as the interaural time difference, or ITD. This cue operates primarily in the lower frequencies. The second effect is the interaural level difference, or ILD, which has its greatest effect in the higher frequencies: these will be significantly softer at the ear opposite the sound source, and this level difference also is used to determine location. Both ITD and ILD are taken into account in the design of the beamforming algorithm. The major advantage of this approach (compared to simply copying the signal of the preferred ear to the other side) is that spatial cues are maintained. That is, the user can still localize the sound and has a natural spatial impression.
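
For a sense of the magnitudes involved, Woodworth’s classic spherical-head formula (a standard textbook approximation, not taken from the article; the head radius and speed of sound below are typical assumed values) estimates the ITD of a source at azimuth θ:

```latex
% Woodworth's spherical-head approximation to the interaural time difference
\[
  \mathrm{ITD}(\theta) \;\approx\; \frac{a}{c}\,\left(\theta + \sin\theta\right)
\]
% a = effective head radius (typically ~0.0875 m),
% c = speed of sound (~343 m/s), theta = source azimuth in radians.
% For a talker directly to the side (theta = pi/2):
%   ITD ~ (0.0875/343)(1.571 + 1) ~ 0.00066 s, ie, about 0.66 ms.
```

ILDs, by contrast, arise from head shadow, grow with frequency, and can reach roughly 20 dB in the high frequencies.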

8) Has localization research been conducted with individuals using this algorithm?

Yes, this was studied as part of a larger project at the University of Iowa. In this study, the participants were fitted bilaterally with the beamforming hearing aids, and the International Speech Test Signal (ISTS) was presented continuously from a loudspeaker located at 90°; car noise was presented from five other loudspeakers surrounding the participant (0°, 45°, 135°, 225°, and 315°). This condition was designed to steer the algorithm to 90°. Localization was then assessed for a signal from 270° (signals were also presented from other azimuths); the stimulus for the localization task was a short speech sound (a male talker saying the logatome “jipp”). These localization findings were compared to the participants’ localization ability in the same listening situation using a fixed omnidirectional setting. The results showed that, even when the Spatial SpeechFocus algorithm was steered to 90°, localization accuracy for a signal presented from 270° was equal to that obtained with omnidirectional processing.

9) Is there research evidence to support the speech recognition benefit of this technology?

Definitely. As we mentioned earlier, the general application of the reverse-cardioid pattern has been supported in several studies, both in the laboratory and in field trials.3 More specifically, regarding binaural beamforming and Spatial SpeechFocus, independent research recently was completed at the University of Iowa. In this research, the participants (n=25) were surrounded by background noise, and the target speech signal, the Connected Speech Test,9 was presented from either the 90° or 180° location. Testing was conducted with the Spatial SpeechFocus algorithm “on” versus “off.” The findings revealed a significant benefit with the algorithm “on” for speech recognition at both target speech locations, with an average improvement of 22%.

10) Are there specific patients that should be encouraged to use the Spatial SpeechFocus algorithm?

As we’ve already discussed, the most common use case is communication while driving or riding in the front seat of a car. That’s a listening situation experienced every day by some patients, and at least occasionally by nearly everyone. But of course, there are other use cases for this feature, such as when the wearer is walking with someone and carrying on a conversation at the same time, which would require focus to the side, or when walking in a group, where focus to the back might be necessary. Conversations like this frequently occur when shopping in a store with a partner, which is why the program was dubbed “Stroll.” There certainly are other speech-in-noise situations where a person might want to focus listening to one side or another (eg, sitting at a dinner table or a meeting with background noise present), and this easily can be done using the smart phone app.

Spatial SpeechFocus is not designed for all listening situations, but rather is an application that is available when other algorithms do not provide an optimum solution. The patient can allow this feature to operate automatically, select a dedicated program for the feature, or take more precise control using his or her smart phone. Regardless of how it is controlled, research has shown that the end result will be improved speech recognition and listening comfort.

References

  1. Powers TA, Beilin J. True advances in hearing aid technology: what are they and where’s the proof? Hearing Review. 2013;20(1):32-39. Available at: https://hearingreview.com/2013/01/true-advances-in-hearing-aid-technology-what-are-they-and-where-s-the-proof-january-2013-hearing-review

  2. Mueller HG, Weber J, Bellanova M. Clinical evaluation of a new hearing aid anti-cardioid directivity pattern. Int J Audiol. 2011;50(4):249-54.

  3. Branda E, Beilin J, Powers T. Directional steering for special listening situations: benefit supported by research evidence. AudiologyOnline. September 2014. Available at: http://www.audiologyonline.com/articles/directional-steering-for-special-listening-12974

  4. Kamkar-Parsi H, Fischer E, Aubreville M. New binaural strategies for enhanced hearing. Hearing Review. 2014;21(10):42-45. Available at: https://hearingreview.com/2014/10/new-binaural-strategies-enhanced-hearing

  5. Powers T, Froehlich M. Clinical results with a new wireless binaural directional hearing system. Hearing Review. 2014;21(11):32-34. Available at: https://hearingreview.com/2014/10/clinical-results-new-wireless-binaural-directional-hearing-system

  6. Froehlich M, Freels K, Powers T. Speech recognition benefit obtained from binaural beamforming hearing aids: comparison to omnidirectional and individuals with normal hearing. AudiologyOnline. May 28, 2015. Available at: http://www.audiologyonline.com/articles/speech-recognition-benefit-obtained-from-14338

  7. Herbig R, Barthel R, Branda E. A history of e2e wireless technology. Hearing Review. 2014;21(2):34-37. Available at: https://hearingreview.com/2014/03/wireless-hearing-aids-history-e2e-wireless-technology

  8. Varallyay G, Pape S, Meyers C. Automatic steering: the director of the binax soundtrack. AudiologyOnline. June 15, 2015. Available at: http://www.audiologyonline.com/articles/automatic-steering-director-binax-soundtrack-14353

  9. Cox RM, Alexander GC, Gilmore C. Development of the Connected Speech Test (CST). Ear Hear. 1987;8(5 Suppl):119S-126S.

Veronika Littmann, PhD

Veronika Littmann, PhD, is Team Leader of R&D Audiology System Development.

 

Dirk Junius

Dirk Junius, PhD, is Head of Global R&D Audiology at Sivantos GmbH in Erlangen, Germany.

 

Eric Branda, AuD

Eric Branda, AuD, is Senior Manager of Product Management, Sivantos Inc in Piscataway, NJ.

Correspondence can be addressed to HR or Dr Branda at: [email protected]

 

Original citation for this article: Littmann V, Junius D, Branda E. SpeechFocus: 360° in 10 Questions. Hearing Review. 2015;22(11):38.