Barry Freeman, PhD

In the modern world of pediatric amplification, options abound, but when should certain features be activated? How do competing tasks interfere with speech perception? Researchers at Arizona State University (ASU) are attempting to answer these questions through a long-term study begun in February 2009 and scheduled to conclude in December 2010.

Researchers there want to pinpoint “the age at which it is safe to assume that a child can attend to different tasks while listening to speech.” It is an important question, and one that investigators at the Berkeley, Calif-based Starkey Hearing Research Center have also explored.

Barry Freeman, PhD, agrees that studying the interaction of hearing loss and cognitive behavior is fertile ground for audiologists. “We are looking at how we can make it easier for individuals who are multitasking, or undergoing different stimuli, and seeing how they handle it from an auditory perception standpoint,” says Freeman, senior director of education and audiology, Starkey Inc, Eden Prairie, Minn. “As we build hearing aids, we are looking at the evidence that would suggest features to maximize the patient’s ability to focus on the speech or primary message—through the competing sounds in the environment.”

As Freeman ponders features such as directional microphones, real-ear measurement integrated into the hearing instrument, compression capabilities, and feedback/noise management, he and his colleagues are thirsting for evidence that illuminates a path to maximum benefit. Studies like the one under way at ASU are crucial because they can sift through the new technology and bring the focus back to where it belongs: the child.

Tom Powers, PhD

Thomas Powers, PhD, vice president, audiology and compliance, at Siemens, Piscataway, NJ, believes children in the university study may well perform better than expected. It depends on the difficulty of the tasks and the level of hearing loss, but Powers’ work with adults suggests the threshold for multitasking may be high. “We did a study a number of years ago with adults who had hearing impairment, and we looked at their ability to multitask in a background noise environment,” says Powers. “Our goal was to look at how well noise reduction circuits could assist adult hearing-impaired individuals. I think we did not challenge them enough and underestimated their ability to focus. I don’t know what the ASU study will come up with, but children always surprise me.”

Powers agrees that technology will only continue to advance, but understanding how existing pediatric amplification features interact is crucial for hearing-impaired children growing up right now. The case of directional microphones illustrates the point. A loose consensus holds that directional microphones are not the best choice for children because they focus too narrowly on one area. With the stakes so high for children learning speech and language, Powers contends that solidifying the consensus with actual data could have far-reaching effects.

Since young children benefit from hearing their whole environment, the interacting technologies must take this into account. “We want to fit young children with a hearing aid that is multidirectional and perhaps change that in 2 or 3 years when they are a little bit older,” says Powers. “If the audiologists believe they should have directional microphones in the classroom, we could go that route. Fortunately, the software in these hearing aids for these advanced features gives us the ability to turn them on and off. So the bigger issue is putting these features into the products, and allowing audiologists to decide when they should be activated.”

Noise reduction, speech enhancement algorithms, feedback reduction, and audio input capabilities (such as FM) are now built into devices being used on very young children. The ability to turn these features on or off has also changed the fitting process dramatically.

A lot of this functionality has come in just the past 5 to 7 years, which Powers says explains the lack of usable data. “The engineers for all of the different companies are doing a better job at designing algorithms that reduce things such as feedback,” enthuses Powers. “We and others have introduced speech-activated circuitry, so when a teacher is speaking, a hearing aid sees that input, and that is the primary signal the child hears. When the FM is silent for a couple of seconds and the teacher is not talking anymore, it goes back to normal processing so kids hear their environment and the other children in the room. Now when the teacher reengages, it automatically goes back to the teacher.”
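
To make the switching behavior Powers describes concrete, here is a minimal Python sketch of FM-priority logic: the FM signal takes precedence while it is active, and processing reverts to the environment microphones after a couple of seconds of FM silence. The threshold, hold time, and frame timing are illustrative assumptions, not any manufacturer’s actual parameters.

```python
# Illustrative sketch of FM-priority switching, as described above: when the
# FM (teacher) signal is active, it becomes the primary input; after a couple
# of seconds of FM silence, processing reverts to the environment microphones.
# Threshold and timing values are assumptions, not real product parameters.

FM_ACTIVITY_THRESHOLD_DB = 50.0   # assumed level above which FM counts as "teacher speaking"
FM_RELEASE_TIME_S = 2.0           # assumed hold time before reverting to normal processing


class FmPrioritySwitch:
    def __init__(self):
        self.silence_timer_s = 0.0
        self.fm_priority = False

    def update(self, fm_level_db: float, frame_duration_s: float) -> str:
        """Return which source should dominate for this audio frame."""
        if fm_level_db >= FM_ACTIVITY_THRESHOLD_DB:
            self.fm_priority = True
            self.silence_timer_s = 0.0
        else:
            self.silence_timer_s += frame_duration_s
            if self.silence_timer_s >= FM_RELEASE_TIME_S:
                self.fm_priority = False
        return "fm_primary" if self.fm_priority else "environment"


# Example: teacher talks, pauses for 3 seconds, then re-engages.
switch = FmPrioritySwitch()
levels = [65, 70, 40, 40, 40, 40, 40, 40, 68]   # dB, one value per 0.5 s frame
print([switch.update(level, 0.5) for level in levels])
```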

In the realm of the very young, objective newborn screenings have replaced once-subjective parental observation and led to fitting children at far younger ages than a decade ago. “We used to leave it up to the parents to find out if the child was not responding properly,” says Powers. “With newborn screening, certainly a lot of the more severe hearing loss has been caught at birth or in the nursery. If they fail a screening, then we follow it up with testing.”

Jane Auriemmo, AuD, CCC-A

FEAR OF THE UNKNOWN

Jane Auriemmo, AuD, CCC-A, says pediatric clinicians have been reluctant to use noise reduction (NR) and directional microphone features in their hearing aid fittings because of the potential loss of audibility and unknown effects on language. Research at ASU and elsewhere could change this position, but for now the hesitancy remains.

Auriemmo points out that in 2008, Patricia Stelmachowicz, PhD, CCC-A, reported results of a study using one type of noise reduction system; that study did not demonstrate negative effects on children’s speech recognition. “A recent study from Ching et al indicates that in everyday listening situations, children are not at a disadvantage in cases where directional microphones are implemented,” says Auriemmo, who serves as manager of the Pediatric Partnership Program at Widex USA. “Additionally, our research indicates that language performance of school-aged children is stable 1 year post-use of both adaptive directional and noise reduction systems.”

Regarding the ASU study, Auriemmo says an important consideration will be the design and implementation of the noise reduction system that is ultimately used. Since NR systems reduce gain for noise inputs, they may potentially decrease gain of desired signals, such as speech and environmental cues. “If the NR is always active, including for low input levels, the likelihood of a negative impact on speech and environmental cues is increased,” says Auriemmo. “If the system is activated only for higher than conversational input levels, then the potential loss of audibility is limited. Other factors include the acoustic analysis of the particular noise environment and the ability of the NR system to make frequency-specific gain changes that consider the slope and degree of the individual’s hearing loss.”
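
As a rough illustration of the level-dependent behavior Auriemmo describes, the sketch below applies a frequency-specific gain reduction only in bands whose input exceeds an assumed conversational level, leaving softer inputs untouched. The bands, levels, and reduction limits are placeholders, not the parameters of any actual NR system.

```python
# Sketch of level-dependent, frequency-specific noise reduction: gain is
# reduced in a band only when its input exceeds an assumed conversational
# level, so soft speech and environmental cues keep their audibility.
# All numbers are illustrative assumptions.

CONVERSATIONAL_LEVEL_DB = 65.0   # assumed level of average conversational speech
MAX_NR_REDUCTION_DB = 10.0       # assumed ceiling on gain reduction per band


def nr_gain_offsets(band_levels_db: dict) -> dict:
    """Return per-band gain offsets (dB) applied on top of the prescribed gain."""
    offsets = {}
    for band, level_db in band_levels_db.items():
        if level_db <= CONVERSATIONAL_LEVEL_DB:
            offsets[band] = 0.0  # NR inactive: low and conversational inputs are untouched
        else:
            # Reduce gain by the amount the band exceeds conversational level, up to a cap.
            excess_db = level_db - CONVERSATIONAL_LEVEL_DB
            offsets[band] = -min(excess_db, MAX_NR_REDUCTION_DB)
    return offsets


# Example: noisy low-frequency band, quieter high-frequency bands.
print(nr_gain_offsets({"500 Hz": 78.0, "2 kHz": 60.0, "4 kHz": 55.0}))
```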

The question of audibility becomes a crucial issue when distinguishing pediatric and adult needs. According to Auriemmo, some hearing aid fitting trends geared toward adult users may make achieving consistent audibility more difficult in the pediatric fitting. “Open-style fittings have enabled audiologists to help more adults with their hearing loss,” says Auriemmo. “However, open fittings result in output changes that compromise audibility. This can be seen using real ear measurement. A major goal for the pediatric fitting is to ensure audibility. Choices we make for hearing-impaired adults may not be ideal for a child still learning language. That said, clinicians are sometimes forced to make some compromises in the real world, for example, when faced with stenotic ear canals and ‘uncooperative’ pinnae—or children who won’t wear any hearing aid unless it conforms to what they consider cosmetically acceptable.”

Auriemmo believes the single biggest misconception about pediatric amplification is that children “don’t need” advanced technology. “When is there a better opportunity to utilize the highest quality signal and most consistent audibility than on the ears of children learning language?” asks Auriemmo.

George Lindley, PhD, AuD, agrees that advanced processing features make sense at an early age. When implemented appropriately, Lindley believes features such as directionality and digital noise reduction can provide children with improved understanding and/or listening comfort during background noise. “When implemented inappropriately, there is the potential for these features to be detrimental to understanding,” says Lindley, manager, pediatric training and education, Oticon Inc, Somerset, NJ. “This is a critical issue to consider for a child who may not be cognitively capable of manually implementing something like directionality. I believe future research will focus on the appropriateness of the automatic switching capabilities of modern hearing aids when used in the pediatric population.”

Bluetooth wireless technology is beginning to be used more frequently in children’s hearing instruments, and Lindley says this is an area that will continue to develop and grow. “As previously ‘high end’ features become available on devices that are considered entry level,” says Lindley, “pediatric audiologists will more frequently face the question of how to set advanced features. This will hopefully become an area of interest for researchers.”

The biggest challenge now, says Lindley, is the decision-making process for parents and clinicians. With so many choices, finding the right balance of features to best satisfy everyone’s needs is no small chore. “Many children are fitted with basic hearing instruments because of third-party reimbursement limitations,” adds Lindley. “However, there are several features (bandwidth, FM compatibility, and fitting flexibility) that are critical in a pediatric fitting. Therefore, it becomes important to identify the critical components needed in a pediatric hearing aid and make sure to provide these features on all models.”

In the final analysis, too many options is a good problem to have, and after more than 30 years in the hearing industry, Barry Freeman remembers when choices were few. “With children, we only had body-worn hearing aids in the old days,” says Freeman. “We used to worry about soup spilling into the microphone of the hearing aid and cords and cables. Now we have a patent on hydrophobic nano coating materials, so we don’t even worry about moisture anymore. We don’t care if the hearing aid falls in the sink.”


Greg Thompson is a contributing writer to Hearing Review Products. He can be reached via Editor Will Campbell at [email protected].

Photo courtesy of the John Tracy Clinic

View from the JTC

Q&A with Los Angeles-based pediatric audiologists from the John Tracy Clinic

Hearing Review Products (HRP): What have been the main advances in the field of pediatric amplification?

Sandra Mintz, MS, director of audiology; Natalie Feleppelle, AuD, pediatric audiologist; and Melissa Himmelman, MA, pediatric audiologist, all from the John Tracy Clinic (JTC), respond: A myriad of sophisticated technologies has emerged, such as complex digital signal processing strategies, multichannel compression and programming, dynamic feedback cancellation, advanced noise reduction strategies, binaural synchronization and program/VC coordination, extended frequency range, and frequency compression and transposition options. Additional advances include DAI and FM input options, automatic aux/mic mixing, wireless connectivity to phones and music players, multiple user programs, substantial size reduction to fit young infants, light indicators for parental troubleshooting, and increased color options and aesthetics.

Many of these technologies are, in theory, aimed at addressing the specific deficits caused by sensorineural hearing loss. However, caution must be exercised in applying such sophisticated technology when fitting pediatric users due to the lack of empirical evidence of benefits for them. There is not a clear understanding of how these features affect the acoustic signal and listening abilities in the dynamic, uncontrollable environments experienced by children. Along with the rise in technology sophistication, we have seen a substantial rise in hearing aid device costs, which can lead to lack of access for children with hearing loss.

In the future, we can expect to see continued rapid release of new technology aimed to improve specific auditory capabilities and atypical hearing losses, wireless connectivity to other devices, continued miniaturization, and integration of hearing aid technology with technology in classrooms, homes, and phones. There will be implantable technologies applicable to children, integration of hearing aid and cochlear implant technology for bimodal fittings, and more children being candidates for cochlear implants instead of hearing aids.

HRP: What is the reimbursement situation for many pediatric hearing aids?

JTC: Most federal/state-funded programs, such as Medicaid programs, cover hearing aids but reimburse poorly. As a result, hearing aid manufacturers have been forced to create “Medicaid pricing” or special price categories for their products so that lower levels of digital hearing aid technology can be made available to this population.

In most states, Medicaid pricing will not cover the cost of advanced (high-end) digital hearing aids that include new, sophisticated technology and features. Additionally, few private insurance plans have hearing aid benefits, and therefore, children whose families do not qualify for federal/state-funded programs must purchase hearing aids out-of-pocket, resulting in financial constraints and lack of access. As a result, many children with amplification are not using advanced (high-end) digital hearing aids, and do not have access to the more sophisticated technology.

HRP: What has proven not to work when it comes to pediatric devices/amplification?

JTC: Although hearing aid programming software has come a long way, and most manufacturers incorporate estimated real-ear-to-coupler differences by age, an “automatic fit” approach does not result in appropriate fitting parameters for infants and children. Hearing aid fitting requires real ear measurement (REM), including RECDs and speech mapping for different input levels using well-researched fitting algorithms such as DSL, to verify the SPL reaching the ear and to appropriately program the hearing aids for the degree and configuration of hearing loss. With children, the fitting goal shifts from the “comfortable amplification” philosophies used with adults to providing comprehensive access to soft, conversational, loud, and distant speech, which is necessary for communication development in infants and young children.
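
For readers unfamiliar with the arithmetic behind RECD-based verification, here is a simple sketch: the hearing aid’s 2-cc coupler output is corrected by the child’s measured real-ear-to-coupler difference to estimate the SPL in the ear canal, which can then be compared against prescriptive targets. The coupler levels, RECD values, and targets below are placeholders for illustration, not actual DSL targets.

```python
# Illustrative arithmetic behind RECD-based verification: the 2-cc coupler
# output is corrected by the child's measured real-ear-to-coupler difference
# (RECD) to estimate the SPL in the ear canal, which is then compared with
# prescriptive targets. All values below are placeholders, not DSL targets.

def estimated_real_ear_spl(coupler_spl_db: dict, recd_db: dict) -> dict:
    """Estimate ear-canal SPL by adding the per-frequency RECD to the coupler SPL."""
    return {f: coupler_spl_db[f] + recd_db[f] for f in coupler_spl_db}


coupler_output = {500: 85.0, 1000: 88.0, 2000: 90.0, 4000: 86.0}   # measured in a 2-cc coupler
recd = {500: 6.0, 1000: 8.0, 2000: 10.0, 4000: 13.0}               # child's measured RECD (dB)
targets = {500: 92.0, 1000: 96.0, 2000: 99.0, 4000: 100.0}         # placeholder prescriptive targets

real_ear = estimated_real_ear_spl(coupler_output, recd)
for freq in sorted(real_ear):
    deviation = real_ear[freq] - targets[freq]
    print(f"{freq} Hz: estimated {real_ear[freq]:.0f} dB SPL, {deviation:+.0f} dB re: target")
```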

HRP: What has been proven to actually work?

JTC: Multichannel gain and compression provide significant advantages when programming hearing aids to ensure that adequate amplification is afforded across the speech spectrum for different input levels, based on the child’s degree and type of hearing loss. Extended frequency range is beneficial in affording high-frequency speech information and improves speech perception performance.
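
As an illustration of how compression trades gain against input level, the sketch below computes wide dynamic range compression (WDRC) gain for a single channel: full gain below the compression threshold, and progressively less gain above it so that soft speech receives more amplification than loud speech. The gain, threshold, and compression ratio are assumed values, not a prescription.

```python
# Sketch of WDRC gain in a single channel: below the compression threshold the
# channel applies its full linear gain; above it, output grows by only 1/ratio
# dB per dB of input, so soft speech is amplified more than loud speech.
# Parameters are illustrative assumptions, not prescribed values.

def wdrc_gain_db(input_db: float, linear_gain_db: float = 30.0,
                 threshold_db: float = 50.0, ratio: float = 2.0) -> float:
    """Return channel gain (dB) for a given input level (dB SPL)."""
    if input_db <= threshold_db:
        return linear_gain_db
    excess = input_db - threshold_db
    # Above threshold, only 1/ratio of the excess input is passed to the output.
    return linear_gain_db - excess * (1.0 - 1.0 / ratio)


for level in (40, 50, 65, 80):   # soft, threshold, conversational, loud inputs
    print(f"input {level} dB SPL -> gain {wdrc_gain_db(level):.1f} dB")
```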

HRP: What is the single biggest misconception about pediatric amplification?

JTC: That anyone (audiologists who rarely see hearing-impaired kids, hearing aid dealers, audiology assistants, and ototechs) can appropriately fit hearing aids on infants and children. Fitting and verifying hearing aids on children requires specialized expertise, specialized equipment, more frequent follow-up, knowledge of special fitting techniques, counseling techniques for families, and the willingness and ability to communicate with the pediatric early intervention, educational, and medical teams.

Fitting hearing aids on infants and children requires specific knowledge of the tests used to measure hearing and auditory function in this population, including the uses and limitations of different test measures. For example, special consideration must be given when programming hearing aids based on ABR data, such as how test thresholds (nHL) translate to estimated behavioral hearing level (eHL) and how this will influence hearing aid programming. The individual must be well versed in the research and philosophies behind pediatric prescriptive methods, including neural plasticity and central auditory development in infants and children.
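
To illustrate the nHL-to-eHL step mentioned above, the sketch below subtracts a frequency-specific correction from each ABR threshold to arrive at estimated behavioral levels for programming. The correction values are placeholders for illustration, not the published norms of any particular protocol.

```python
# Illustrative conversion from ABR thresholds in nHL to estimated behavioral
# hearing level (eHL) using frequency-specific corrections, as mentioned above.
# The correction values are placeholders, not any clinic's published norms.

NHL_TO_EHL_CORRECTION_DB = {500: 15, 1000: 10, 2000: 5, 4000: 5}   # assumed corrections


def estimated_behavioral_thresholds(abr_nhl: dict) -> dict:
    """Subtract a frequency-specific correction from each ABR threshold (nHL -> eHL)."""
    return {f: abr_nhl[f] - NHL_TO_EHL_CORRECTION_DB[f] for f in abr_nhl}


abr_results_nhl = {500: 60, 1000: 55, 2000: 50, 4000: 55}   # example ABR thresholds (nHL)
print(estimated_behavioral_thresholds(abr_results_nhl))     # eHL values used for programming
```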

—Greg Thompson