Slide Sessions
Slide Session B: Perception
Friday, October 7, 1:30 - 3:00 pm EDT, Regency Ballroom
Chair: Suhail Matar, New York University
When abstract becomes concrete: the neurobiology of naturalistic conceptual processing
Viktor Kewenig¹, Gabriella Vigliocco¹, Jeremy Skipper¹; ¹University College London
Most of what we know about the neurobiology of conceptual processing comes from experiments that present individuals with isolated words and ask them to carry out artificial tasks to activate associated concepts. These studies have identified fixed sets of regions for concepts that do not have a physical referent (abstract) and those that do (concrete). Yet in natural environments we are exposed to a range of dynamic, multimodal contextual information beyond speech, such as faces, bodies, and objects. Behavioral data suggest that conceptual processing is modulated by such context, but no study has assessed to what extent this holds for the underlying neurobiological organization. We investigated the processing of a large set of words in a naturalistic setting with rich context (watching a movie). Brain activity was estimated using deconvolution, deriving the brain response function rather than assuming its shape. We made two predictions. (1) Neural encodings of concepts are based on meaning-related experiential information processed in a set of corresponding brain regions. To address this, we used an automated web-based meta-analysis as well as reverse correlation (“Peaks and Valleys Analysis”). (2) There are no fixed sets of regions for abstract and concrete concepts; instead, activation changes dynamically with visual context. Specifically, if abstract concepts are highly embedded in context (e.g., “science” in the setting of a chemistry experiment), they activate concrete-like structures, and vice versa. To test this, we added a “contextual embeddedness” regressor to our model, based on the semantic similarity (measured with GloVe) between the labels of visual objects present (obtained through automated feature extraction) and verbally produced concepts. Group analysis using linear mixed-effects models revealed activation for abstract words in anterior cingulate cortex, thalamus, insula, bilateral medial prefrontal areas, and anterior temporal lobe (ATL). Results from the meta-analysis and the reverse correlation showed that these regions were associated with processing valence, interoception, and social information. Concrete words activated motor and premotor areas, right-hemisphere prefrontal areas, visual cortex, precuneus, right inferior frontal gyrus, and bilateral superior temporal lobe (STL). These regions were associated with processing information about body parts and motion. Overlap was found in STL, ATL, and visual cortex, where activation was related to language in general. Results from the second model revealed that contextual embeddedness modulated activity in regions corresponding to the default mode network (DMN). Comparing abstract and concrete concepts in high- vs. low-context conditions against the brain maps obtained from (1) showed that in low-context conditions the neurobiological organization of concrete concepts more closely resembled that of abstract concepts, and vice versa. Our results indicate that during real-world conceptual processing, habitual experiences are encoded in a set of related brain regions, but this underlying neurobiological organization is not fixed: activation depends on the dynamics of situational context. This conclusion emphasizes the need to incorporate experiential information into models of word meaning. It also suggests a new challenge for reaching more human-like representations in computational language processing: understanding and modeling the dynamic influences of multimodal contextual information.
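The “contextual embeddedness” regressor lends itself to a compact illustration. The sketch below is not the authors' code; it is a minimal Python example of how the GloVe similarity between a spoken word and the visual objects on screen could be computed, assuming the embeddings are available as a dict of NumPy arrays and that object labels come from an upstream feature-extraction step (all names are hypothetical).

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def contextual_embeddedness(word, scene_objects, glove):
    """Mean GloVe similarity between a spoken word and the labels of
    visual objects on screen; higher values = more embedded in context.

    word:          the verbally produced concept (hypothetical input)
    scene_objects: object labels from automated feature extraction
    glove:         dict mapping tokens to NumPy vectors
    """
    w = glove.get(word)
    if w is None:
        return np.nan
    sims = [cosine(w, glove[obj]) for obj in scene_objects if obj in glove]
    return float(np.mean(sims)) if sims else np.nan

# e.g., "science" uttered while lab equipment is on screen:
# contextual_embeddedness("science", ["beaker", "flask", "scientist"], glove)
```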
Single neuron encoding of speech across cortical layers of the human superior temporal gyrus
Matthew Leonard¹, Laura Gwilliams¹, Kristin Sellers¹, Jason Chung¹, Barundeb Datta², Edward Chang¹; ¹University of California, San Francisco, ²imec
Decades of lesion and brain imaging studies have identified the superior temporal gyrus (STG) as a core area for speech perception in the human brain. However, little is known about how single neurons in human STG encode the properties of speech sounds. Here, we used high-density Neuropixels arrays to record neuronal spiking activity from all cortical layers simultaneously. We recorded from a total of 281 single neurons in mid-posterior STG in three participants while they listened to natural spoken sentences. Neurons exhibited multi-peaked spectro-temporal receptive fields corresponding to acoustic-phonetic and prosodic speech features. Within single recording sites, tuning was heterogeneous and organized by depth, revealing a previously unknown third dimension of speech feature encoding in STG. We compared single-neuron speech-evoked responses across cortical layers with electrocorticography (ECoG) recordings from the cortical surface. High-gamma ECoG activity correlated with neuronal firing along the entire cortical depth, indicating that the diverse tuning profiles of all cortical layers contribute to the surface ECoG potential. Together, these results demonstrate an important axis of encoding in STG, namely heterogeneous tuning of single neurons to speech features across the cortical laminae. *ML & LG contributed equally.
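The abstract does not specify how the spectro-temporal receptive fields were fit. A common approach, shown here as a minimal Python sketch rather than the authors' pipeline, is regularized regression of binned spike counts on a time-lagged spectrogram; the array shapes and parameter values are assumptions.

```python
import numpy as np
from sklearn.linear_model import Ridge

def fit_strf(spectrogram, spikes, n_lags=40, alpha=1.0):
    """Estimate a spectro-temporal receptive field by ridge regression.

    spectrogram: (n_times, n_freqs) stimulus representation
    spikes:      (n_times,) binned spike counts for one neuron
    n_lags:      number of time bins of stimulus history to include
    Returns the STRF as an (n_lags, n_freqs) weight matrix.
    """
    n_times, n_freqs = spectrogram.shape
    # Build the lagged design matrix: each row holds the preceding
    # n_lags spectrogram frames, flattened into one feature vector.
    X = np.zeros((n_times - n_lags, n_lags * n_freqs))
    for t in range(n_lags, n_times):
        X[t - n_lags] = spectrogram[t - n_lags:t].ravel()
    y = spikes[n_lags:]
    model = Ridge(alpha=alpha).fit(X, y)
    # Reshape the learned weights back into a (lag x frequency) map,
    # whose peaks correspond to the neuron's preferred features.
    return model.coef_.reshape(n_lags, n_freqs)
```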
Dorsal striatal contributions to speech sound categorization
Kevin Sitek¹, Bharath Chandrasekaran¹; ¹University of Pittsburgh
Auditory decision making critically depends upon structural connections between superior temporal cortex and dorsal striatum. Auditory corticostriatal connections have been mapped in animal models including non-human primates, where primary auditory cortex preferentially connects to putamen while the caudate head receives most of its inputs from anterior superior temporal cortex. However, it is unclear whether human auditory corticostriatal connectivity follows similar organizational principles, due to the challenges of non-invasively imaging small, deep brain structures. Using high-quality, high-resolution diffusion-weighted MRI tractography, we identified structural connectivity streamlines between auditory cortical regions and dorsal striatal regions in a publicly available sub-millimeter resolution single-subject in vivo dataset and replicated our findings in a near-millimeter resolution public dataset (n=13 participants). Across the auditory cortical hierarchy, putamen connections were more frequent than caudate connections; only anterior-most superior temporal cortex had meaningful connectivity with caudate, particularly the caudate head, and this connectivity exhibited a distinct rightward asymmetry. Finally, we examined the functional relevance of auditory-putamen connectivity using a well-studied speech categorization task that has yielded robust striatal activation in prior studies conducted at lower field strength (3T). Using ultra-high field 7T MRI, we acquired 1.5 mm isotropic resolution BOLD functional MRI from participants who categorized stimuli on the basis of dynamically varying pitch patterns. Sixteen stimulus tokens (the monosyllable “di” produced with each of the four lexical tones and spoken by two male and two female talkers) were presented pseudorandomly. After each trial, participants received minimal feedback (“correct” or “wrong”) based on their response. In each participant, we observed robust feedback-based (correct > incorrect) fMRI responses bilaterally in auditory cortex and in dorsal striatum, with the largest striatal clusters in putamen. These putamen clusters align well with the regions showing robust auditory corticostriatal structural connectivity, again with a rightward asymmetry. Overall, our work demonstrates prioritized connectivity between superior temporal cortex and putamen and suggests distinct functional roles for striatal subdivisions in auditory speech categorization.
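The tractography software pipeline is not described in the abstract. As an illustration of the endpoint-based connectivity counts such analyses rely on, the following minimal Python sketch tallies streamlines terminating in two regions of interest; it assumes streamlines have already been transformed into voxel coordinates and that the masks share that voxel grid, and all names are hypothetical.

```python
import numpy as np

def count_connections(streamlines, roi_a, roi_b):
    """Count streamlines with one endpoint in each ROI.

    streamlines: list of (n_points, 3) arrays in voxel coordinates
    roi_a, roi_b: boolean 3-D masks (e.g., superior temporal cortex
                  and putamen) defined on the same voxel grid
    """
    def in_mask(point, mask):
        # Round the endpoint to the nearest voxel and test membership.
        i, j, k = np.round(point).astype(int)
        return (0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]
                and 0 <= k < mask.shape[2] and bool(mask[i, j, k]))

    n = 0
    for sl in streamlines:
        start, end = sl[0], sl[-1]
        if ((in_mask(start, roi_a) and in_mask(end, roi_b)) or
                (in_mask(start, roi_b) and in_mask(end, roi_a))):
            n += 1
    return n
```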
Decoding semantic relatedness and prediction from EEG: A classification model comparison
Timothy Trammel¹, Natalia Khodayari², Steven J. Luck¹, Matthew J. Traxler¹, Tamara Y. Swaab¹; ¹University of California, Davis, ²Johns Hopkins University
While conventional univariate analyses of electroencephalogram (EEG) and event-related potential (ERP) data continue to provide valuable insights into the neural computations underlying visual word recognition, recent work has shown that multivariate pattern analysis methods using machine-learning classification provide powerful tools for investigating the content of those computations. Much less is known, however, about the reliability and usefulness of EEG decoding methods for studying language processing. EEG decoding studies commonly use classifiers such as support vector machines (SVMs), discriminant function analysis (DFA), or random forests (RFs), often without justification for the classification method chosen. The present study formally compared these models' performance in classifying EEG data from two word-priming studies (a prediction accuracy priming paradigm and a semantic relatedness priming paradigm) to address the following questions: 1) Can SVMs, DFAs, and RFs each classify EEG data according to successful prediction or semantic relatedness? 2) Are there significant differences between the models when classifying the EEG data? 3) If so, how do the models differ in classification performance? 4) Can classifier performance be replicated across priming paradigms with different tasks? The first study used a predictive priming paradigm (Brothers et al., 2016). Participants (n=45) were presented with a prime word and instructed to actively predict the upcoming target word. The word pairs in each trial were either related (circus – CLOWN) or unrelated (trim – CLOWN), and participants self-reported prediction accuracy. Trials were labeled according to relatedness and prediction accuracy, yielding three binary decoding conditions: predicted related vs. unpredicted related, unpredicted related vs. unpredicted unrelated, and predicted related vs. unpredicted unrelated. The second study (n=40) used a relatedness decision task (Kappenman et al., 2021), which yielded a single decoding condition: related vs. unrelated word pairs. Decoding analyses were adapted from an SVM-based classification method (Bae & Luck, 2018). Decoding was performed over 128 iterations using 10-fold cross-validation to avoid overfitting and ensure robust decoding performance. The models were compared against chance-level accuracy (50%) and against each other using cluster-based permutation testing. Each model was tested on averaged EEG data and on single-trial EEG. For the prediction task, both prime-locked and target-locked EEG signals were decoded; for the relatedness task, target-locked EEG signals were decoded. The permutation-based cluster analyses over the time course of the data showed that the SVM significantly outperformed the other classification methods: it showed the best EEG classification accuracy in both priming studies and was reliable across the prediction and relatedness tasks, with peak decoding accuracy above 90% for the averaged data and above 75% for the single-trial data. In future studies, we will use SVMs to examine the content of the representations that are pre-activated during language processing.
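The decoding procedure follows Bae and Luck (2018), whose exact implementation differs from the sketch below; this minimal Python/scikit-learn example only illustrates the general scheme of time-resolved linear SVM decoding with 10-fold cross-validation repeated over 128 iterations. Array shapes and function names are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import StratifiedKFold, cross_val_score

def decode_timecourse(epochs, labels, n_iterations=128, n_folds=10, seed=0):
    """Time-resolved binary decoding of EEG epochs with a linear SVM.

    epochs: (n_trials, n_channels, n_times) array of EEG data
    labels: (n_trials,) binary condition labels (e.g., related vs. unrelated)
    Returns a (n_times,) array of decoding accuracy, averaged over
    iterations; chance level for balanced binary labels is 0.5.
    """
    rng = np.random.default_rng(seed)
    n_trials, n_channels, n_times = epochs.shape
    accuracy = np.zeros(n_times)
    clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
    for _ in range(n_iterations):
        # Re-draw the fold assignment on every iteration.
        cv = StratifiedKFold(n_splits=n_folds, shuffle=True,
                             random_state=int(rng.integers(1_000_000)))
        for t in range(n_times):
            # Decode from the scalp topography at each time point.
            scores = cross_val_score(clf, epochs[:, :, t], labels, cv=cv)
            accuracy[t] += scores.mean()
    return accuracy / n_iterations
```

The resulting accuracy timecourse could then be submitted to cluster-based permutation tests against chance, as described in the abstract.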