Intracranial EEG investigation of the neural processing of speech in light of its multiscale dynamics
Poster E84 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.
Clément Sauvage1, Benjamin Morillon1; 1Aix Marseille University, Inserm, Institut de Neurosciences des Systèmes (INS), Marseille, France
To obtain a comprehensive understanding of speech perception, it is crucial to expand beyond the dominant "dual-stream" model, which primarily focuses on functional neuroanatomy. This neuroanatomical perspective must be complemented by a neurophysiological model capturing local and global oscillatory neural dynamics, as well as by an informational model elucidating cognitive algorithms and representational inference processes. This is particularly important because speech, as a temporal signal, possesses a hierarchical linguistic structure, notably encompassing phonemes, syllables, words, and phrases. Exploring how information is analyzed across these timescales, and how these timescales are hierarchically combined through network dynamics, is essential for unraveling how the human brain links intricate acoustic signals to semantic representations. To explore these dynamics, we collected stereo-electroencephalography (sEEG) data from pharmaco-resistant epileptic patients while they listened to a 10-minute story in French. We then trained transformer neural networks to predict upcoming words, syllables, or phonemes, and investigated the neural correlates of continuous entropy and surprise values at these three linguistic timescales. We performed a multivariate Temporal Response Function (mTRF) analysis to identify the neural frequencies encoding each linguistic feature and to characterize the dynamic and spatial properties of these representations. We anticipate revealing a spatio-temporal gradient reflecting the shift from low-level, local to high-level, distributed linguistic predictive features. Moreover, we expect different linguistic features to be encoded by complementary neural frequencies.
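As a minimal sketch of the information-theoretic quantities mentioned above: from a language model's predictive distribution over the next unit (word, syllable, or phoneme), entropy quantifies uncertainty before the unit is heard, and surprise (surprisal) quantifies how unexpected the heard unit was. The vocabulary and probabilities below are illustrative toy values, not outputs of the study's models.

```python
import numpy as np

# Hypothetical next-word distribution from a transformer language model
# at one position in the story (toy values for illustration only).
vocab = ["chat", "chien", "maison", "arbre"]
p_next = np.array([0.55, 0.25, 0.15, 0.05])

# Entropy of the predictive distribution, in bits:
# uncertainty about the upcoming word *before* it is heard.
entropy = -np.sum(p_next * np.log2(p_next))

# Surprise of the word actually heard, in bits: -log2 p(word).
# Suppose the heard word was "chien".
heard = "chien"
surprise = -np.log2(p_next[vocab.index(heard)])

print(f"entropy  = {entropy:.3f} bits")   # uncertainty of the prediction
print(f"surprise = {surprise:.3f} bits")  # unexpectedness of the outcome
```

Computed at every word, syllable, and phoneme position, such values yield the continuous regressors whose neural correlates can then be assessed.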
Finally, we envision future work involving functional connectivity analyses, which would provide additional insight into the dynamic functional hierarchy linking the different linguistic processing stages. Importantly, this work serves key theoretical goals by offering a critical test of the extent to which neural oscillations play a fundamental role in the computations underlying speech processing. It has the potential to define the intricate mapping between speech and neural timescales, shedding light on how information is transferred and combined across the linguistic processing hierarchy.
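The core of the mTRF approach mentioned above can be sketched as a ridge-regularized linear regression from time-lagged copies of a stimulus feature (e.g. word-level surprise) onto a neural channel. The simulation below is a self-contained toy illustration with synthetic data, not the study's pipeline, and all sizes and parameters are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a sparse feature time series (non-zero at "word onsets")
# and one simulated neural channel driven by a known response kernel.
n_times, n_lags = 2000, 20              # e.g. lags 0-190 ms at 100 Hz
stim = np.zeros(n_times)
onsets = rng.choice(n_times - n_lags, 60, replace=False)
stim[onsets] = rng.standard_normal(60)  # feature value at each onset

true_trf = np.hanning(n_lags)           # ground-truth temporal response function
neural = np.convolve(stim, true_trf)[:n_times] + 0.1 * rng.standard_normal(n_times)

# Lagged design matrix: column k holds the stimulus delayed by k samples.
X = np.column_stack([np.roll(stim, k) for k in range(n_lags)])
X[:n_lags] = 0  # drop samples wrapped around by np.roll

# Ridge-regularized TRF estimate: w = (X'X + lambda*I)^-1 X'y
lam = 1.0
trf_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ neural)

# The estimated kernel should closely match the true one.
r = np.corrcoef(trf_hat, true_trf)[0, 1]
print(f"correlation with true TRF: {r:.2f}")
```

In the multivariate case, columns for several features (and their lags) are stacked into one design matrix, so each feature's unique contribution is estimated while controlling for the others.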
Topic Areas: Speech Perception, Speech-Language Treatment