
Leveraging large language models to capture and quantify neural signatures of uncertainty in narrative comprehension

Poster Session B, Friday, October 25, 10:00 - 11:30 am, Great Hall 3 and 4

Yang Lei¹, Yongqiang Cai¹, Guosheng Ding¹; ¹Beijing Normal University

Studying uncertainty in human language comprehension has traditionally been challenging due to the inherent difficulty of quantifying this abstract cognitive construct, especially in naturalistic language tasks. In this study, we combined large language models and neuroimaging to investigate the neural correlates of uncertainty during naturalistic language comprehension. By leveraging a large language model, Open Pre-trained Transformer (OPT), to quantify the uncertainty associated with predicting upcoming content, we obtained an uncertainty time course aligned with story comprehension. Using a Least Absolute Shrinkage and Selection Operator (LASSO) regression model, we established a relationship between neural activity and the uncertainty time course, identifying specific neural signatures that correlated with the estimated uncertainty. Notably, these neural signatures of uncertainty generalized across different story datasets, suggesting their robustness and independence from specific stimulus characteristics. Furthermore, we found that the uncertainties quantified by the model are distinct from pure hidden-layer representations, as they exhibit a stronger association with those specific neural signatures than token embeddings do. Additionally, we observed modality-specific effects, with the auditory cortex involved in representing uncertainty, a finding not reported in previous studies. Our findings provide empirical evidence that large language models can capture the neural signatures of uncertainty in semantic processing during narrative comprehension. Crucially, these findings reveal the existence of distinct neural signatures of uncertainty that are robust across different story datasets.
Overall, our work contributes to a deeper understanding of language comprehension and highlights the potential of large language models as powerful tools for investigating uncertainty in naturalistic language tasks.
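The abstract does not spell out how uncertainty was quantified or how the LASSO mapping was configured, so the following is a minimal sketch under common assumptions: uncertainty is taken as the Shannon entropy of the language model's next-word distribution, and simulated NumPy arrays stand in for OPT outputs and time-aligned neural features. All variable names and parameter choices here are illustrative, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)

def entropy(probs, axis=-1):
    """Shannon entropy (in nats) of probability distributions."""
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=axis)

# Simulated next-word probability distributions, one per word of the story
# (in the study these would come from OPT's softmax over its vocabulary).
n_tokens, vocab_size = 200, 50
logits = rng.normal(size=(n_tokens, vocab_size))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)

# Uncertainty time course: z-scored entropy of each predictive distribution.
uncertainty = entropy(probs)
u = (uncertainty - uncertainty.mean()) / uncertainty.std()

# Simulated neural features (e.g. region-wise responses aligned to the same
# time course); the first three are constructed to track uncertainty.
n_features = 40
X = rng.normal(size=(n_tokens, n_features))
X[:, :3] += u[:, None]

# LASSO selects a sparse subset of neural features that predicts the
# uncertainty time course; nonzero coefficients mark candidate signatures.
model = Lasso(alpha=0.1).fit(X, u)
selected = np.flatnonzero(model.coef_)
```

The sparsity penalty is what makes the recovered signature interpretable: features whose coefficients survive shrinkage are the candidate neural correlates of uncertainty, and (as in the abstract) their stability can then be checked by refitting on a different story dataset.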

Topic Areas: Syntax and Combinatorial Semantics, Computational Approaches
