Language familiarity dependent encoding of natural speech in human temporal lobe
Poster E76 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
Also presenting in Lightning Talks E, Thursday, October 26, 10:00 - 10:15 am CEST, Auditorium
Ilina Bhaya-Grossman1,2, Matthew Leonard2, Yizhen Zhang2, Laura Gwilliams2, Keith Johnson1, Edward Chang2; 1University of California, Berkeley, 2University of California, San Francisco
Languages differ in the set of contrasting sounds (defined by phonetic features) and in the ways these sounds are sequenced to produce units of meaning like words. By the time a speaker is proficient in a language, they have had extensive experience with and exposure to both of these sources of information, which fundamentally alters how the speech signal is understood. It remains unknown, however, how such language experience affects the neural encoding and processing of this information at these phonological levels. Here, we exploit the high spatiotemporal resolution of direct high-density electrocorticography (ECoG) to identify neural populations that respond to speech in native vs. unfamiliar languages, and to address the extent to which phonetic and sequence-level encoding depends on language familiarity. We recorded ECoG while participants passively listened to natural speech in their native language and in a language that was unfamiliar to them (either Spanish or English). We found that both native and unfamiliar languages elicited significant speech responses in nearly all cortical sites throughout the human temporal lobe, suggesting that the same neural substrates are active regardless of language familiarity. However, within these active populations, the encoding of certain phonological information depended on language familiarity. Specifically, cortical sites in the superior and middle temporal gyrus showed significantly stronger encoding of language-specific sequence information and more accurate word boundary decoding for the native language than for the unfamiliar language. In contrast, the encoding of phonetic features was similar regardless of language familiarity. Finally, we found that sequence-level and phonetic features were jointly encoded in a majority of neural populations, suggesting that the neural representation of natural speech in the human temporal lobe integrates information across these phonological levels. Together, these results demonstrate how language familiarity affects the neural encoding of phonological information and, further, show that this effect is strongest at the level of phoneme sequences and words.
Topic Areas: Speech Perception, Phonology