
The Role of Domain General Working Memory in Predictive Sentence Processing

Poster A80 in Poster Session A, Tuesday, October 24, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.

Yasemin Gokcen, David Noelle, Rachel Ryskin; University of California, Merced

To keep up with conversation, humans tend to make predictions about the next word someone will say. Among other neural signatures of prediction, the N400 ERP component tends to be more negative when listeners hear a word that is not predictable from the preceding context (Kutas & Federmeier, 2011). The predictability of a word can be estimated using surprisal, the negative log probability of the word in context, obtained from a neural language model (Levy, 2008; Hahn et al., 2022). However, human memory is imperfect. How do humans maintain the linguistic context for prediction over multiple timescales (e.g., the immediately preceding words as well as the larger discourse) and allocate working memory resources so that the elements most informative for prediction are maintained while others are lost? Working memory processes associated with prefrontal cortex (PFC) have been proposed to perform similar functions, modeled via neural networks with gating mechanisms that learn when to maintain important information and when to update it (Servan-Schreiber & Cohen, 1992; Hochreiter & Schmidhuber, 1997). Yet past fMRI work suggests that the prefrontal regions associated with non-linguistic working memory (the multiple demand network; Fedorenko et al., 2013) are not meaningfully engaged during listening comprehension tasks (Blank et al., 2017; Diachek et al., 2020; Shain et al., 2019). To shed light on this question, we are collecting EEG data during a story-listening task using the Natural Stories corpus (Futrell et al., 2020). Data collection is still in progress. The N400 and other neural indices of prediction will be extracted for each word in the stories and compared to surprisal values from multiple neural language models with different context-gating mechanisms. Long short-term memory (LSTM) networks include these PFC-like gating mechanisms, while simple recurrent neural network (RNN) models do not. By comparing how well the models' surprisal values fit the human neural responses, and by examining which sentences elicit the most dissimilar surprisal estimates across models, we can explore the contributions of PFC-like working memory gating mechanisms to linguistic prediction. We predict that LSTM surprisal will fit the human N400 data better than RNN surprisal (Aurnhammer & Frank, 2019), as the simple RNN has no gating mechanisms (Tripathi, 2021) and we believe context maintenance and updating are necessary for robust linguistic prediction. We also predict that LSTM surprisal will fit the human N400 fairly well, such that N400 amplitude increases as model surprisal increases. However, this relationship may not hold to the same extent throughout a story, as both RNNs and LSTMs tend to struggle to retain context over longer timescales.
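For concreteness, here is a minimal sketch of how per-word surprisal can be computed from an autoregressive neural language model. GPT-2 via the Hugging Face transformers library is used purely as an illustrative stand-in; the abstract does not specify which language models are trained or how they are accessed.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def token_surprisals(text):
    # Surprisal of token w_t is -log2 p(w_t | w_<t).
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits           # shape: (1, seq_len, vocab_size)
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    next_ids = ids[0, 1:]                    # each position predicts the next token
    nats = -log_probs[torch.arange(len(next_ids)), next_ids]
    bits = nats / torch.log(torch.tensor(2.0))
    return list(zip(tokenizer.convert_ids_to_tokens(next_ids), bits.tolist()))

for tok, s in token_surprisals("The children went outside to play."):
    print(f"{tok:>12}  {s:6.2f} bits")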
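The architectural contrast at issue can also be made concrete in a few lines of PyTorch. The dimensions below are hypothetical, and the built-in modules are stand-ins for whatever models are actually trained in the study.

import torch
import torch.nn as nn

emb_dim, hidden_dim = 64, 128

# Simple (Elman-style) RNN: h_t = tanh(W x_t + U h_{t-1} + b). No gates,
# so the hidden state is overwritten indiscriminately at every step.
simple_rnn = nn.RNN(emb_dim, hidden_dim, batch_first=True)

# LSTM: adds a cell state c_t controlled by learned input, forget, and
# output gates, e.g. c_t = f_t * c_{t-1} + i_t * g_t, so the network can
# learn when to maintain context and when to update it (the PFC-like
# gating described in the abstract).
lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

x = torch.randn(1, 10, emb_dim)              # one sequence of 10 word embeddings
rnn_out, h_n = simple_rnn(x)
lstm_out, (h_n, c_n) = lstm(x)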
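Finally, a sketch of the planned comparison step: regressing per-word N400 amplitude on per-word surprisal and comparing fits across language models. Ordinary least squares and the variable names are illustrative assumptions; the abstract does not commit to a specific statistical model.

import numpy as np

def surprisal_fit(n400, surprisal):
    # Fit n400 ~ slope * surprisal + intercept; return (slope, R^2).
    X = np.column_stack([surprisal, np.ones_like(surprisal)])
    coefs, _, _, _ = np.linalg.lstsq(X, n400, rcond=None)
    residuals = n400 - X @ coefs
    r_squared = 1 - residuals.var() / n400.var()
    return coefs[0], r_squared

# Hypothetical usage: a higher R^2 for LSTM surprisal than for simple-RNN
# surprisal would support the prediction that gated context maintenance
# matters for linguistic prediction.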

Topic Areas: Speech Perception; Control, Selection, and Executive Processes
