
Neural Encoding for Language with Supervised Large Language Models

Poster Session C, Friday, October 25, 4:30 - 6:00 pm, Great Hall 3 and 4

Jingyuan Sun, Marie-Francine Moens; KU Leuven

Neural encoding for language is a critical topic at the intersection of natural language processing and cognitive neuroscience. Recent advances in deep learning and pre-trained language representations have opened new avenues for improving the precision and efficiency of neural encoding. Using pre-trained language representations in neural encoding allows a deeper capture of the complex relationships between linguistic stimuli and neural responses, potentially leading to superior language processing models. Furthermore, exploring the relationship between artificial and neural representations of language can provide valuable insights into the fundamental mechanisms of language processing. Despite extensive research on unsupervised embeddings for English neural encoding, there is a lack of studies on supervised embeddings for neural encoding in other languages, such as Chinese. Moreover, the few studies that adopt supervised embeddings for neural encoding in English often rely on fine-tuning pre-trained models for task supervision. However, fine-tuning has been shown to distort pre-trained knowledge, which is inconsistent with the human brain, whose language network does not need substantial reorganization to learn new tasks. To address these gaps, this paper proposes using both fine-tuned and prompt-tuned supervised sentence embeddings to fit a neural encoding model for Chinese. Prompt-tuning, which protects pre-trained knowledge by freezing the model weights and learning additional embeddings to fit a task, has not been widely explored for neural encoding. In pursuit of this goal, we employ partial fine-tuning, full fine-tuning, and prompt-tuning to adapt the pre-trained language model to eight natural language understanding (NLU) tasks individually. The aim is to discern the influence of task tuning on a Transformer model for neural encoding and to identify which tasks yield the best encoding performance. We find that: 1. Prompt-tuning on five of the eight tasks yields supervised representations that significantly exceed their fully fine-tuned counterparts in predicting brain activity in the language network, whereas on none of the eight tasks do fine-tuned embeddings significantly outperform the prompt-tuned ones. 2. Tuning on tasks that require a compositional understanding of entities and concepts yields supervised representations that are better at neural encoding than those tuned on other tasks. 3. The proportion of tuned parameters strongly influences the neural encoding performance of fine-tuned models. In summary, this paper makes three key contributions. First, we propose a novel neural encoding framework with prompt-tuned supervised representations and show that it is a viable alternative to fine-tuning-based methods. Second, we demonstrate through comprehensive experiments how different tuning methods influence a pre-trained Transformer in neural encoding. Third, our findings indicate that balancing the protection of pre-trained knowledge with the learning of task-related features is crucial for optimal neural encoding performance. Overall, this work could help us better understand the relationship between task-tuned artificial and brain representations of language.
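
To make the encoding step concrete, the sketch below shows one common way such a pipeline can be assembled, given sentence embeddings from a task-tuned model and fMRI responses to the same sentences. It is a minimal illustration rather than the authors' implementation: the backbone name (bert-base-chinese), the mean-pooling of hidden states, the ridge-regression encoder scored with cross-validated Pearson correlation, and the helper names sentence_embeddings and encoding_score are all assumptions made for illustration. In the prompt-tuned condition, the backbone weights would stay frozen and only learned prompt embeddings would be trained on the NLU task before features are extracted.

# Minimal sketch (not the paper's code) of a neural encoding pipeline with
# task-tuned sentence embeddings; backbone, pooling, and ridge encoder are assumptions.
import numpy as np
import torch
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold
from transformers import AutoModel, AutoTokenizer

MODEL_NAME = "bert-base-chinese"  # placeholder; a task-tuned checkpoint would be loaded here
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME).eval()

def sentence_embeddings(sentences):
    """Mean-pool the final hidden states into one feature vector per sentence."""
    feats = []
    with torch.no_grad():
        for s in sentences:
            inputs = tokenizer(s, return_tensors="pt", truncation=True)
            hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, dim)
            feats.append(hidden.mean(dim=1).squeeze(0).numpy())
    return np.stack(feats)

def encoding_score(X, Y, n_splits=5):
    """Cross-validated voxel-wise ridge encoding: predict fMRI responses Y
    (n_sentences x n_voxels) from embeddings X; return mean Pearson r per voxel."""
    scores = np.zeros(Y.shape[1])
    for train, test in KFold(n_splits, shuffle=True, random_state=0).split(X):
        ridge = RidgeCV(alphas=np.logspace(-2, 4, 7)).fit(X[train], Y[train])
        pred = ridge.predict(X[test])
        for v in range(Y.shape[1]):
            scores[v] += np.corrcoef(pred[:, v], Y[test, v])[0, 1] / n_splits
    return scores

Voxel-wise scores of this kind would then be compared across tuning methods and tasks within the language network to obtain findings like those summarized above.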

Topic Areas: Computational Approaches, Methods
