To understand language, we have to remember the words that were uttered and combine them into an interpretation. How does the brain retain information long enough to accomplish this, when neuronal firing events themselves are so short-lived? Hartmut Fitz from the Max Planck Institute for Psycholinguistics and his colleagues propose a neurobiological explanation that bridges this temporal gap, according to an announcement on the Institute’s website: neurons change their spike rate based on experience, and this adaptation provides memory for sentence processing.

Hartmut Fitz

Did the man bite the dog, or was it the other way around? When processing an utterance, words need to be assembled into the correct interpretation within working memory. One aspect of comprehension is to establish “who did what to whom.” This process of unification takes much longer than basic events in neurobiology, such as neuronal spikes or synaptic signaling. Hartmut Fitz, who leads the Neurocomputational Models of Language group at the Max Planck Institute for Psycholinguistics, and his colleagues propose an account in which adaptive features of single neurons supply memory that is sufficiently long-lived to bridge this temporal gap and support language processing.

Model Comparisons

Together with researchers Marvin Uhlmann, Dick van den Broek, Peter Hagoort, Karl Magnus Petersson (all Max Planck Institute for Psycholinguistics) and Renato Duarte (Jülich Research Centre, Germany), Fitz studied working memory in spiking networks through an innovative combination of experimental language research and methods from computational neuroscience.

In a sentence comprehension task, simulated circuits of biologically realistic neurons and synapses were exposed to sequential language input, which they had to map onto the semantic relations that characterize the meaning of an utterance. For example, “the cat chases a dog” means something different from “the cat is chased by a dog,” even though both sentences contain similar words. The various cues to meaning need to be integrated within working memory to derive the correct message. The researchers varied the neurobiological features of the simulated networks and compared the performance of the resulting model versions. This allowed them to pinpoint which of these features implemented the memory capacity required for sentence comprehension.

Towards a Computational Neurobiology of Language

They found that working memory for language processing can be provided by the down-regulation of neuronal excitability in response to external input. “This suggests that working memory could reside within single neurons, which contrasts with other theories where memory is either due to short-term synaptic changes or arises from network connectivity and excitatory feedback,” said Fitz.
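To make the proposed mechanism concrete, the following is a minimal, illustrative sketch rather than the authors’ actual model: a leaky integrate-and-fire neuron with spike-rate adaptation, with all parameter values chosen purely for illustration. Each spike increments a slow adaptation variable that dampens excitability, and because this variable decays over hundreds of milliseconds, it retains a trace of recent input long after the millisecond-scale spikes themselves have ended.

```python
import numpy as np

# Illustrative sketch (not the published model): a leaky integrate-and-fire
# neuron with spike-rate adaptation. Each spike increments a slow adaptation
# variable `w` that lowers excitability; its slow decay carries a trace of
# recent input that outlasts individual spikes.

dt = 1.0                        # time step (ms)
tau_v, tau_w = 20.0, 500.0      # membrane vs. adaptation time constants (ms, assumed)
v_thresh, v_reset = 1.0, 0.0    # spike threshold and reset (arbitrary units)
b = 0.2                         # adaptation increment per spike (assumed)

v, w = 0.0, 0.0
spikes, w_trace = [], []

for t in range(1000):
    I = 1.5 if t < 200 else 0.0        # drive the neuron for 200 ms, then silence
    v += dt * (-v / tau_v + I - w)     # leaky integration, opposed by adaptation
    w += dt * (-w / tau_w)             # slow decay of the adaptation variable
    if v >= v_thresh:                  # spike: reset voltage, increase adaptation
        spikes.append(t)
        v = v_reset
        w += b
    w_trace.append(w)

# Long after the input has ended, `w` is still elevated: the neuron "remembers"
# having been driven, which is the kind of single-neuron memory the study
# attributes to spike-rate adaptation.
print(f"spikes: {len(spikes)}, w at 200 ms: {w_trace[199]:.3f}, w at 800 ms: {w_trace[799]:.3f}")
```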

Their model shows that this neuronal memory is context dependent and sensitive to serial order, which makes it ideally suited to language. Additionally, the model was able to establish binding relations between words and semantic roles with high accuracy. “It is crucial to try and build language models that are directly grounded in basic neurobiological principles,” said Fitz.

“This work shows that we can meaningfully study language at the neurobiological level of explanation, using a causal modeling approach that may eventually allow us to develop a computational neurobiology of language.”

Original Paper: Fitz H, Uhlmann M, van den Broek D, Duarte R, Hagoort P, Petersson KM. Neuronal spike-rate adaptation supports working memory in language processing. PNAS. 2020. DOI: 10.1073/pnas.2000222117.

Source: Max Planck Institute, PNAS

Image: Max Planck Institute