July 10, 2007

A particular resonance pattern in the brain’s auditory processing region appears to be key to its ability to discriminate speech, researchers have found. David Poeppel and Huan Luo of the University of Maryland, College Park found that an inherent rhythm of neural activity called the “theta band” reacts specifically to spoken sentences by changing its phase.

They also noted that the natural oscillation frequency of this rhythm provides further evidence that the brain samples speech in segments about the length of a syllable. The authors published their findings in the June 21, 2007 issue of the journal Neuron.

The findings represent the first time that such a broad neural response has been identified as central to perceiving the highly complex dynamics of human speech, said the researchers. Previous studies have explored the responses of individual neurons to speech sounds, but not the response of the auditory cortex as a whole.

In their experiments, the researchers asked volunteers to listen to spoken sentences such as “He held his arms close to his sides and made himself as small as possible.” At the same time, the subjects’ brains were scanned using magnetoencephalography (MEG). In this imaging technique, sensitive detectors measure the magnetic fields produced by electrical activity in brain regions.

Poeppel and Luo pinpointed the theta band—which oscillates between 4 and 8 cycles per second—as one that changed its phase pattern with unique sensitivity and specificity in response to the spoken sentences. What’s more, as the researchers degraded the intelligibility of the sentences, the theta band pattern lost its tracking resonance with the speech.
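As a rough illustration of what “tracking the phase of the theta band” means, the sketch below extracts an instantaneous 4–8 Hz phase series from a signal using a band-pass filter followed by a Hilbert transform. This is a generic textbook approach applied to a synthetic signal, not the authors’ actual MEG analysis pipeline; the sampling rate and noise level are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                        # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)     # 2 s of data

# Synthetic "recording": a 5 Hz theta-range component plus broadband noise
rng = np.random.default_rng(0)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * rng.standard_normal(t.size)

# Band-pass 4-8 Hz (theta), then take the analytic signal's angle as phase
b, a = butter(4, [4 / (fs / 2), 8 / (fs / 2)], btype="band")
theta = filtfilt(b, a, signal)          # zero-phase theta-band signal
phase = np.angle(hilbert(theta))        # instantaneous phase, radians in (-pi, pi]
```

Degrading the stimulus, as the researchers did, would correspond here to the phase series of repeated trials no longer aligning with one another.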

The researchers said their findings suggest that the brain discriminates speech by modulating the phase of the continuously generated theta wave in response to the incoming speech signal. What’s more, they said, the time-dependent characteristics of this theta wave suggest that the brain samples the incoming speech in “chunks” that are about the length of a syllable in any given language.
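The arithmetic linking the theta band to syllable-length chunks is direct: one cycle of an oscillation at f Hz lasts 1000/f milliseconds, so the 4–8 Hz band corresponds to windows of 125–250 ms, bracketing the roughly 200 ms duration of a typical syllable. A one-line sketch:

```python
def period_ms(freq_hz: float) -> float:
    """Duration of one oscillation cycle, in milliseconds."""
    return 1000.0 / freq_hz

print(period_ms(4.0))  # 250.0 ms: slow end of the theta band
print(period_ms(8.0))  # 125.0 ms: fast end of the theta band
print(period_ms(5.0))  # 200.0 ms: near the ~200 ms window cited by the authors
```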

According to the blog Talking Brain, which is co-moderated by Poeppel, the research shows compelling evidence (based on single-trial MEG data) that speech is analyzed using a ~200 ms window:

“How natural speech is represented in the auditory cortex constitutes a major challenge for cognitive neuroscience. Although many single-unit and neuroimaging studies have yielded valuable insights about the processing of speech and matched complex sounds, the mechanisms underlying the analysis of speech dynamics in human auditory cortex remain largely unknown. Here, we show that the phase pattern of theta band (4-8 Hz) responses recorded from human auditory cortex with magnetoencephalography (MEG) reliably tracks and discriminates spoken sentences and that this discrimination ability is correlated with speech intelligibility. The findings suggest that an ~200 ms temporal window (period of theta oscillation) segments the incoming speech signal, resetting and sliding to track speech dynamics. This hypothesized mechanism for cortical speech analysis is based on the stimulus-induced modulation of inherent cortical rhythms and provides further evidence implicating the syllable as a computational primitive for the representation of spoken language.”
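To make the discrimination idea quoted above concrete, the toy sketch below represents each “sentence” by a theta phase trajectory and assigns a new trial to whichever template lies closer in mean circular distance. The phase series are synthetic and the classifier is a deliberately simple stand-in, not the study’s actual analysis.

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 250.0
t = np.arange(0, 2.0, 1 / fs)

def theta_phase(freq_hz, phase_offset, noise=0.3):
    """Toy theta-band phase series for one 'trial' (radians in (-pi, pi])."""
    raw = 2 * np.pi * freq_hz * t + phase_offset + noise * rng.standard_normal(t.size)
    return np.angle(np.exp(1j * raw))

# Templates: two 'sentences' that induce different phase trajectories
sent_a = theta_phase(5.0, 0.0)
sent_b = theta_phase(5.0, np.pi)

def circ_dist(p, q):
    """Mean circular distance between two phase series (0 = identical)."""
    return np.mean(np.abs(np.angle(np.exp(1j * (p - q)))))

# A fresh noisy trial of sentence A is classified by nearest template
trial = theta_phase(5.0, 0.0)
d_a = circ_dist(trial, sent_a)
d_b = circ_dist(trial, sent_b)
# d_a comes out smaller than d_b, so the trial is matched to sentence A
```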

SOURCE: American Association for the Advancement of Science (AAAS) and Talking Brain blog