Ultra-high-field (7T) fMRI reveals graded semantic structure in the ventral anterior temporal cortex
Poster D15 in Poster Session D, Saturday, October 26, 10:30 am - 12:00 pm, Great Hall 4
Saskia L. Frisby¹, Ajay D. Halai¹, Christopher R. Cox², Matthew A. Lambon Ralph¹, Timothy T. Rogers³; ¹University of Cambridge, ²Louisiana State University, ³University of Wisconsin-Madison
Semantic cognition supports language production and comprehension, object recognition and classification, and understanding of everyday events. The cortical semantic system includes a bilateral “hub” centred on the ventral anterior temporal lobes (vATL) that recasts modality-specific semantic information experienced over time into transmodal and transtemporal representations that express conceptual structure (Lambon Ralph et al., 2017; Jackson et al., 2021). Recent evidence from human intracranial grid electrocorticography (ECoG) showed that voltages recorded from the vATL express graded multi-dimensional semantic structure (Cox et al., 2024) in a code that is distributed across space and changes rapidly over time, especially in very anterior subregions (Rogers et al., 2021). It is therefore unclear whether fMRI can detect the vATL’s semantic code, given fMRI’s limited temporal resolution and the signal dropout and distortion issues that plague vATL imaging (Halai et al., 2014, 2015). To answer this question, we collected 7T fMRI data from 32 participants while they named the same 100 black-and-white line drawings employed in the ECoG study. We used a novel multi-echo, multiband acquisition protocol designed to counteract signal dropout and distortion in the vATL while maintaining enhanced signal-to-noise across the rest of the cortex (Frisby et al., in prep). To test whether the distributed animacy code observed by Rogers et al. (2021) was apparent in fMRI, we constructed a ventral temporal ROI based on the locations of the ECoG grid electrodes and trained L1-regularized logistic regression models to distinguish animate from inanimate stimuli. To test whether the vATL represents graded semantic structure, we used feature-verification norms (Dilkina & Lambon Ralph, 2013) to construct a matrix representing the similarity of each pair of stimuli, decomposed the matrix into three orthogonal semantic dimensions, and trained L1-regularized linear regression models to predict the coordinates of each stimulus on each dimension. Despite the temporal-resolution and signal-homogeneity challenges, we reliably decoded animacy from the same regions covered by the ECoG electrodes in the prior study (cross-validated accuracy > 0.8), including in the ventral ATL centre-point of the semantic “hub” (Lambon Ralph et al., 2017). Classifiers reliably decoded animacy even in very anterior regions where univariate contrasts yielded null results. fMRI signals also reliably predicted stimulus coordinates along the first principal semantic dimension (cross-validated correlation > 0.6), but not along the other two, consistent with the prior ECoG results when models used L1 regularization. These results indicate that 7T fMRI data, like ECoG, can be used in conjunction with multivariate methods to reveal semantic information across the vATL semantic “hub,” including both binary animacy distinctions and continuous, graded representations, and in regions where univariate contrasts suggest no such signal exists. This opens the door to studying semantic representation noninvasively in large samples of healthy participants and with full cortical coverage. Although we were unable to decode the second and third principal semantic dimensions in these data, discovery of such structure in the ECoG data required more sophisticated regularizers. To determine whether fMRI can reveal this more subtle semantic structure, future work should adopt a comparable approach.
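For readers wanting a concrete picture of the animacy analysis, below is a minimal Python sketch of cross-validated decoding with an L1-regularized logistic regression, as named in the abstract. The data shapes, the random stand-in data, and the hyperparameters (regularization strength C, number of folds) are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch of the animacy-decoding analysis; data and
# hyperparameters are stand-ins, not the authors' pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_items, n_voxels = 100, 500                    # 100 drawings; hypothetical ROI size
X = rng.standard_normal((n_items, n_voxels))    # stand-in for per-item response patterns
y = rng.integers(0, 2, size=n_items)            # 1 = animate, 0 = inanimate

# The L1 (lasso) penalty selects a sparse subset of informative voxels,
# consistent with the distributed code described in the abstract
clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0)
acc = cross_val_score(clf, X, y, cv=10).mean()  # mean cross-validated accuracy
print(f"cross-validated animacy accuracy: {acc:.2f}")
```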
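The graded-structure analysis can be sketched in the same spirit: build an item-by-item similarity matrix from binary feature norms, extract orthogonal dimensions, and predict each item's coordinate from the imaging patterns with lasso regression. The use of eigendecomposition here, along with the matrix construction and hyperparameters, is an assumption for illustration rather than the authors' method.

```python
# Hypothetical sketch of the graded-dimension analysis; the decomposition
# choice and all numbers are illustrative assumptions.
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_items, n_voxels, n_features = 100, 500, 200
F = rng.integers(0, 2, size=(n_items, n_features)).astype(float)  # stand-in feature norms
S = np.corrcoef(F)                       # item-by-item semantic similarity matrix

# Eigendecomposition yields orthogonal dimensions; keep the top three,
# scaled so each column gives item coordinates on one dimension
evals, evecs = np.linalg.eigh(S)         # eigenvalues in ascending order
top = np.argsort(evals)[::-1][:3]
coords = evecs[:, top] * np.sqrt(evals[top])

X = rng.standard_normal((n_items, n_voxels))   # stand-in fMRI patterns
for d in range(3):
    pred = cross_val_predict(Lasso(alpha=0.1), X, coords[:, d], cv=10)
    r = np.corrcoef(pred, coords[:, d])[0, 1]  # cross-validated correlation
    print(f"dimension {d + 1}: r = {r:.2f}")
```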
Topic Areas: Meaning: Lexical Semantics, Methods