A group of researchers at The Johns Hopkins University in Baltimore is trying to understand how the brain makes sense of complex auditory environments, such as a cocktail party. The team is testing how humans track sound patterns over time and under what circumstances the brain registers that a pattern has been broken. The preliminary findings were presented at the Acoustics 2012 meeting in Hong Kong in May.

“When a person hears a sound, both what we call ‘bottom-up’ and ‘top-down’ processes occur in the brain,” says Mounya Elhilali, assistant professor at the Center for Language and Speech Processing at Johns Hopkins. Hearing the whole range of sounds in a room is a bottom-up process, but choosing to pay attention to a particular voice is an example of a top-down process, Elhilali explains. “We try to understand the interaction of these two processes,” she says in a press statement.

Elhilali and her colleagues ask volunteers to listen to a series of sounds and press a button when they hear something unusual. For example, the researchers may start out playing violin music and then introduce the sound of a piano. The switch to piano represents a change in timbre, or sound quality. The researchers also change the pitch (going from low to high notes or vice versa) and the loudness. As expected, the results indicate that listeners perceive these changes as salient sound events that grab their attention.

Forming an expectation about the timbre, pitch, and loudness of sounds, and then realizing that the expectation has been broken, generally takes a few seconds, says Elhilali, although the scientists are still fully analyzing their data. With further analysis, the researchers also hope to glean information about how the different expectations interact and what happens when multiple changes, for example to loudness and pitch, occur at the same time.

Eventually, the team would like to repeat the experiments while monitoring the volunteers’ brain waves through sensors placed on the scalp. This would offer the scientists a glimpse of the neural changes that take place as the sound scene changes.

The ultimate aim is to understand how the brain adapts to different acoustic environments, says Elhilali. Engineers might be able to use the knowledge to design better hearing aids, voice recognition software, and recording equipment.

Elhilali’s group, which is based in the Department of Electrical and Computer Engineering, also works on such technological applications of the research. “When a person walks into a room, they first gather information that will help them adjust to the acoustic scene,” says Elhilali. She adds that hearing aids and other human-designed sound-processing technologies may one day be just as adaptive as the human brain.

More information about the 163rd ASA meeting is available from the Acoustical Society of America.

SOURCE:
The Acoustical Society of America (ASA)