Symposia
Can we investigate linguistic modularity in the brain with non-modular NLP systems?
Tuesday, October 24, 1:30 - 3:30 pm CEST, Auditorium
Organizer: Shailee Jain1,2; 1The University of Texas at Austin, 2The University of California, San Francisco
Presenters: Gina Kuperberg, Leila Wehbe, Andrea E. Martin, Alexander G. Huth, Christophe Pallier
The rise of natural language processing (NLP) systems has led to a sweeping paradigm shift in language neuroscience. These systems jointly capture linguistic information at many different levels, such as syntax, semantics, and discourse. Proponents of NLP-based brain modeling believe that this integration makes it possible to investigate multiple brain processes simultaneously, unlike traditional paradigms that target specific manipulations. Critics counter that the systems' lack of modularity and "black-box" nature hampers the isolation of distinct processes and introduces biases. Our field has reached an impasse where "global organization" claims from NLP-based models often contradict "functional localization" claims from controlled experiments, lesion mapping, and neural disorders. In a rare event, we bring together neurolinguists, psychologists, and computer scientists for a general-audience discussion of three questions: which brain mechanisms we can infer from these vastly different artificial systems; the perils of NLP systems not grounded in linguistic theory versus the perils of pigeonholing brain function into known theories; and the utility of NLP interpretability tools for isolating distinct brain processes.
Presentations
The Potential and Limitations of Large Language Models for Understanding Predictive Language Processing
Gina Kuperberg1,2; 1Tufts University, 2Massachusetts General Hospital
By being trained to predict upcoming words, large language models (LLMs) have achieved tremendous success in generating human-like language. Given the brain's sensitivity to the contextual predictability of incoming words during language comprehension, this is not surprising. I will discuss several ERP/MEG studies that illustrate the use of LLMs to explore where, when, and how the brain processes language in healthy individuals, and how this breaks down in schizophrenia. However, I will argue that state-of-the-art transformer LLMs cannot provide a complete understanding of the brain's predictive mechanisms because their architecture differs significantly from that of the human cortex. Instead, a more fruitful approach may be to explore more biologically plausible and cognitively interpretable architectures, such as predictive coding. I will present simulations showing that predictive coding explains not only the brain's sensitivity to contextual predictability, but also its neural dynamics and its sensitivity to various lexical variables, priming, and their higher-order interactions.
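To make the predictive-coding idea concrete for readers unfamiliar with it, the sketch below shows the core computation of a single predictive-coding layer in the Rao-and-Ballard style: latent "cause" units are iteratively updated to minimize the error between the input and the layer's top-down prediction. This is an illustrative toy, not the speaker's actual simulation; the dimensions, learning rate, and prior are arbitrary assumptions.

```python
# Minimal single-layer predictive-coding sketch (illustrative assumption,
# not the talk's model). Latents r are refined to reduce prediction error.
import numpy as np

rng = np.random.default_rng(0)
n_input, n_latent = 64, 16
W = rng.standard_normal((n_input, n_latent)) * 0.1  # generative (top-down) weights
x = rng.standard_normal(n_input)                    # sensory input (e.g., word features)

r = np.zeros(n_latent)                  # latent estimate ("prediction units")
lr = 0.05
for step in range(200):
    error = x - W @ r                   # prediction-error units
    r += lr * (W.T @ error - 0.1 * r)   # error-driven update with a weight-decay prior

print("final prediction error:", np.linalg.norm(x - W @ r))
```

In such models, the residual error signal (here, `error`) is what is proposed to drive neural responses to unpredictable input, which is how the framework connects to predictability effects like the N400.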
Testing neurobiology-of-language theories in the wild with NLP
Leila Wehbe1; 1Carnegie Mellon University
Naturalistic experiments, through their use of complex language stimuli, allow us to study language processes in the wild and to test whether theories built on controlled stimuli generalize to the natural setting. Analyzing these complex experiments requires capturing high-level meaning in a computational object, and NLP offers a broad range of tools for this task. While these tools are imperfect models of the brain, they are still the most expressive instruments we have. The language representations extracted from these tools can be carefully combined to create in vitro, computationally controlled experiments that test different theories of how language information is represented in the brain, as illustrated in the sketch below. Such experiments can be used to disentangle the brain's representation of composed meaning from individual word meaning, and semantic information from syntactic information.
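The following is a minimal sketch of what "carefully combining" feature spaces can look like in practice: fitting ridge encoding models on two feature spaces separately and jointly, then comparing held-out variance explained. All data here are synthetic stand-ins (random matrices in place of LLM embeddings, syntactic features, and fMRI responses), and the scikit-learn-based pipeline is an assumption for illustration, not the speaker's actual method.

```python
# Hedged sketch: variance partitioning across two feature spaces with
# synthetic data. Real analyses use actual LLM/syntactic features and fMRI.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_trs, n_voxels = 1000, 50                 # fMRI time points and voxels (synthetic)
sem = rng.standard_normal((n_trs, 300))    # stand-in for semantic (LLM) features
syn = rng.standard_normal((n_trs, 20))     # stand-in for syntactic features
# Synthetic brain responses driven by both feature spaces plus noise.
Y = (sem @ rng.standard_normal((300, n_voxels)) * 0.5
     + syn @ rng.standard_normal((20, n_voxels))
     + rng.standard_normal((n_trs, n_voxels)) * 5)

def cv_r2(X, Y):
    # Random split is fine for i.i.d. synthetic data; real fMRI analyses
    # hold out contiguous segments because of temporal autocorrelation.
    X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X_tr, Y_tr)
    return model.score(X_te, Y_te)

r2_sem, r2_syn = cv_r2(sem, Y), cv_r2(syn, Y)
r2_joint = cv_r2(np.hstack([sem, syn]), Y)
print(f"semantic: {r2_sem:.3f}  syntactic: {r2_syn:.3f}  joint: {r2_joint:.3f}")
# Overlap of the two spaces; near zero here since the features are independent.
print(f"shared (approx.): {r2_sem + r2_syn - r2_joint:.3f}")
```

Comparing the joint model against each individual model is one standard way to attribute voxel responses to semantic versus syntactic information without new data collection, which is the sense in which such analyses are "computationally controlled."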
The lacunae of language models in the neuroscience of language
Andrea E. Martin1,2; 1Max Planck Institute for Psycholinguistics, 2Radboud University
A foundational principle of cognitive science is that the behavior of systems is wildly insufficient to diagnose underlying mechanistic similarity. In the philosophy of science, prediction is insufficient (and unnecessary) for explanatory force in theory (Scheffler, 1957; Shmueli, 2010). Yet in the age of large language models (LLMs), we appear to use prediction, rather than explanation, to guide us, at our peril (see Guest & Martin, 2023), towards accounts of brain computation and human language processing. To circumvent this explanatory impotence, I suggest we interrogate what it would actually mean if LLMs were explanations of behavior, cognition, or neural data. I then argue that the format of linguistic information in the brain is likely wildly different not only from LLMs, but also from the valuable computational- and algorithmic-level descriptions obtained from formal linguistics and psychology. I close by arguing for explanantia that are nonetheless constrained by these disciplines and by neural dynamics.
How can we use large language models to learn about the brain?
Alexander G. Huth1; 1The University of Texas at Austin
Large language models are extremely effective at predicting how the human brain responds to natural language, but what can they tell us about how the brain works? One approach is to equate the computational goal of the brain with the objective the model is trained on. However, inferences of this type are strongly confounded by "multiple realizability": models trained to solve different problems can develop similar internal representations. Instead, we have pursued two alternative approaches: (1) building language models that are interpretable from the ground up, and (2) applying neural network interpretation techniques to models that are fit to the brain. I will show how we have used these approaches to study the question of representational timescales across the brain. These results both confirm earlier findings and reveal an exciting new view of how representational timescale and semantic selectivity are related in cortex.
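One way to make the timescale question concrete is sketched below: compare encoding models built from stimulus features pooled over short versus long context windows, and ask which better predicts a given voxel. The data are synthetic and the pooling scheme is a simplified assumption for illustration, not the speaker's actual analysis.

```python
# Hedged sketch: probing a voxel's preferred representational timescale by
# comparing short- vs long-window feature pooling (synthetic data).
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n_trs, dim = 800, 100
word_feats = rng.standard_normal((n_trs, dim))  # stand-in for per-word LLM states

def pooled(feats, window):
    # Average features over the preceding `window` time points.
    return np.vstack([feats[max(0, t - window + 1): t + 1].mean(axis=0)
                      for t in range(len(feats))])

short, long_ = pooled(word_feats, 2), pooled(word_feats, 32)
# One synthetic voxel driven by long-timescale structure plus noise.
y = long_ @ rng.standard_normal(dim) + rng.standard_normal(n_trs) * 2

half = n_trs // 2  # contiguous train/test split, as in fMRI practice
for name, X in [("short", short), ("long", long_)]:
    model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(X[:half], y[:half])
    print(name, "held-out R^2:", round(model.score(X[half:], y[half:]), 3))
```

A voxel whose responses are better predicted by long-window features would be assigned a longer representational timescale; mapping this preference across cortex is the kind of question the talk addresses.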
Traditional and NLP-based approaches for studying syntactic representations in the brain
Christophe Pallier1; 1CEA/SAC/JOLIOT/NeuroSpin center
Do representations proposed in linguistic theories, such as constituent trees, correspond to actual data structures constructed in real time in the brain during language comprehension? And if so, which brain regions are involved? This question has been investigated in a series of functional magnetic resonance imaging (fMRI) studies using various experimental paradigms, including repetition priming, syntactic complexity manipulation, and NLP models trained on limited corpora. I will argue that while many questions remain unanswered, progress has been made. For example, the results suggest that full syntactic parsing of sentences may not happen automatically, but that local syntactic operations (merge) do. I will also discuss the use of deep learning models to locate syntactic and semantic information in the brain.