Naturalistic sign language differentially synchronizes the frontotemporal language network and parietal areas in the brain.
Poster B69 in Poster Session B, Friday, October 25, 10:00 - 11:30 am, Great Hall 4
Maria Zimmermann1, Piotr Tomaszewski2, Olga Kolosowska3, Marcin Szwed3, Marina Bedny1; 1Johns Hopkins University, 2University of Warsaw, 3Jagiellonian University
Sign languages, which are perceived through the visual modality, offer important insights into the origins of language neurobiology. Which aspects of the canonical ‘language network’ are modality-independent, and which are specific to audition or vision? Here we investigate this question by using fMRI to study comprehension of naturalistic stories. Naturalistic fMRI approaches offer complementary insights to controlled experiments, which have used single words or simple sentences and sometimes remove essential parts of sign languages, such as facial grammar and classifier constructions. Naturalistic fMRI opens a novel path for studying a broad range of linguistic processes, including discourse-level processes. Prior controlled experiments find activity for sign-language comprehension and production in frontotemporal networks (Emmorey et al., 2020). However, there is also some evidence of increased superior parietal involvement in the production and comprehension of the spatial components of sign language (Emmorey et al., 2021). Here, congenitally deaf native signers (n=20) viewed a naturalistic story in Polish Sign Language (PJM). The story was produced by a deaf native signer who interpreted Edgar Allan Poe’s ‘The Fall of the House of Usher.’ The story contains rich structure at many levels of the linguistic hierarchy, from phonology to compositional syntax, semantics, and discourse. Following prior naturalistic fMRI work with spoken languages, participants also viewed control stimuli consisting of increasingly scrambled versions of the same story (Lerner et al., 2011). A scrambled-sentence condition removes discourse structure, while a scrambled-words condition also disrupts sentence-level grammar. We also introduced a novel control condition: a meaningless story composed of pseudo-signs.
To facilitate individual-subject analyses of language regions, the language network was localized in each deaf participant using a PJM language localizer experiment (Newman et al., 2015; Fedorenko et al., 2010). Participants watched sentences in PJM and a perceptually matched control condition consisting of backward sentences overlaid upon each other, which therefore contained faces and movement but was not interpretable. Both whole-cortex and individual-subject ROI analyses revealed synchrony for the intact story throughout the classic frontotemporal language network bilaterally, including inferior frontal and lateral temporal areas as well as ventral occipitotemporal cortex. Frontotemporal areas showed higher synchrony for sentences than for words, and for words than for the non-linguistic control condition, as in prior naturalistic story studies of spoken languages. By contrast, parietal cortices showed enhanced synchrony for the intact story relative to all other conditions. This pattern held when we focused specifically on language-responsive voxels in parietal cortex. This evidence supports the hypothesis that parietal cortices play a unique role in discourse-level processing for sign languages, including the processing of some classifier types (Emmorey et al., 2021).
Topic Areas: Signed Language and Gesture, Meaning: Discourse and Pragmatics