A next-generation hearing aid that can “see” is being developed at the University of Stirling in Scotland, where a research team led by a computer scientist is designing an aid to help users in noisy environments. The new hearing device will use a miniaturized camera that can lip-read, process visual information in real time, and seamlessly fuse and switch between audio and visual cues. According to a University of Stirling announcement, Amir Hussain, PhD, is leading the ambitious joint research project, which has received nearly £500,000 in funding from the UK Government’s Engineering and Physical Sciences Research Council (EPSRC) and industry.
“This exciting world-first project has the potential to significantly improve the lives of millions of people who have hearing difficulties,” said Hussain in the University of Stirling announcement. “Existing commercial hearing aids are capable of working on an audio-only basis, but the next-generation audio-visual model we want to develop will intelligently track the target speaker’s face for visual cues, like lip reading. These will further enhance the audio sounds that are picked up and amplified by conventional hearing aids.”
Hussain explained that the research team’s 360° approach to the software design is expected to open up more environments to hearing device users, enabling them to communicate confidently in noisier settings with potentially reduced listening effort. Hussain believes that, in addition to people with hearing loss, the lip-reading capabilities of the proposed device could prove valuable to those communicating in very noisy places where hearing protectors are worn, such as factories, and in emergency response settings.
Hussain’s team has reportedly been working on a prototype, and the recent funding will be allocated to tackling the key challenge of blending and enhancing appropriately selected audio and visual cues. Stirling psychology professor Roger Watt, PhD, will work with Hussain to help develop new computing models of human vision for real-time tracking of facial features. The researchers report that, once developed, the software prototype will be made available to other researchers worldwide, opening up the opportunity for further work in the field. Future hardware prototyping research will explore the most user-friendly and aesthetically pleasing placements of the mobile mini camera attachment, such as fitting it into a pair of ordinary glasses, a brooch, a necklace, or an earring.
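The article does not describe the team’s actual fusion algorithm, but the idea of “blending and enhancing appropriately selected audio and visual cues” can be illustrated with a common, simple technique: confidence-weighted fusion, where the weight given to the visual (lip-reading) cue grows as the acoustic environment gets noisier. The function below is a minimal hypothetical sketch, not the Stirling implementation; the score and SNR inputs are assumed quantities.

```python
# Illustrative sketch only: a simple confidence-weighted fusion of
# hypothetical per-frame speech-presence scores from the audio and
# visual (lip-reading) modalities, each in [0, 1].

def fuse_cues(audio_score: float, visual_score: float, audio_snr_db: float) -> float:
    """Blend audio and visual speech-presence scores.

    As the estimated audio signal-to-noise ratio drops, the weight
    shifts from the audio cue toward the visual cue.
    """
    # Map an SNR of roughly -10 dB..20 dB onto an audio weight of 0..1.
    w_audio = min(max((audio_snr_db + 10.0) / 30.0, 0.0), 1.0)
    return w_audio * audio_score + (1.0 - w_audio) * visual_score

# In quiet conditions (20 dB SNR) the audio cue dominates;
# in loud noise (-10 dB SNR) the visual cue takes over entirely.
quiet = fuse_cues(audio_score=0.9, visual_score=0.6, audio_snr_db=20.0)   # -> 0.9
noisy = fuse_cues(audio_score=0.2, visual_score=0.8, audio_snr_db=-10.0)  # -> 0.8
```

A real device would need far more sophistication (face tracking, source separation, latency constraints), but the sketch shows why fusing the two cue streams, rather than relying on audio alone, can keep a speech estimate usable when noise overwhelms the microphone.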
Hussain is also collaborating with Jon Barker, PhD, at the University of Sheffield, who has developed biologically-inspired approaches for separating speech sources that will complement the audio-visual enhancement techniques being developed at Stirling. Other project partners include the MRC/CSO Institute of Hearing Research—Scottish Section, and hearing aid manufacturer, Phonak.
“We are excited about the potential ability for this new technology that takes advantage of the similar information presented to the eyes and ears in noisy conversation to aid listening in those difficult situations, a consistent issue for those affected by hearing loss,” said William Whitmer, PhD, MRC/CSO Institute of Hearing Research.
Source: University of Stirling
Photo credits: University of Stirling; © Neosiam | Dreamstime.com