Attention warps semantic representations across the human cerebellum
Poster D17 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port
Amanda LeBel1, Matteo Visconti di Oleggio Castello1, Jack L. Gallant1, Richard B. Ivry1; 1University of California Berkeley
Humans effortlessly shift their attention between different objects and events in the environment. These shifts are not limited to spatial attention but also include conceptual shifts, such as those that occur over the course of a conversation. As a limited resource, attention must be allocated in response to task demands. In neuroimaging studies, the effect of attention can be evaluated by examining representational changes across the brain. These studies have shown that, in most cortical regions, including regions associated with goal-directed attention, there is an expanded semantic representation of the attended topic. One exception is observed in regions associated with the ventral attention network, where attention results in a reduced semantic representation of the attended topic (Cukur et al., 2013). This work has also shown that attentional effects become larger at relatively more central stages of the processing hierarchy (Cukur et al., 2013; Kastner et al., 2000). Thus, early sensory areas such as V1 show a modest attentional effect, while more cognitive regions later in the processing stream show much larger attentional effects. Previous work has shown that, during language perception, the cerebellum primarily represents semantic information (LeBel et al., 2021). However, it remains unclear how attention modulates semantic representations across the cerebellar cortex. In the present study, we asked whether attentional modulation in the cerebellum resembles that in sensory cortical areas, with a relatively small magnitude of change, or whether the magnitude of modulation is larger, as in higher-order cortical areas. In addition, we asked how the direction of attentional modulation varies across regions within the cerebellum. We collected BOLD fMRI data from four participants who watched short movie clips without sound. In separate runs, participants were instructed to covertly attend to either “people” or “vehicles” while maintaining central fixation (120 minutes of data per participant). The videos were labeled with a 985-dimensional semantic space derived from word co-occurrence rates. Banded ridge regression was used to model the relationship between the semantic features and the BOLD response for each voxel. The resulting model weights indicate how semantic information is represented in each voxel. Separate models were fit for each attention condition. We then measured the attentional shift for each voxel by comparing the similarity of the estimated semantic representations across the two attention conditions. Attentional shifts of visual semantic representations in the cerebellum were of similar magnitude to those observed in higher-order cognitive regions of the cerebral cortex. Furthermore, the direction of the attentional shifts varied across the cerebellum, with a bias for semantic tuning to shift away from the attended category. Moreover, shifts in the posterior lobe, especially in regions associated with higher-level cognition, were larger in magnitude and directed away from the attended category, suggesting that semantic tuning in the cerebellum shifts differently from cortical tuning during attention.
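The modeling step described above (fitting regularized linear models that map the 985-dimensional semantic features to each voxel's BOLD response, separately for each attention condition, and then comparing the estimated per-voxel weights across conditions) can be sketched as follows. This is a minimal illustrative example, not the authors' code: it substitutes plain ridge regression from scikit-learn for banded ridge regression, and the array sizes, variable names, and cosine-based shift measure are assumptions made for illustration.

import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sizes: fMRI timepoints, semantic features, cerebellar voxels.
n_timepoints, n_features, n_voxels = 600, 985, 1000

# Stand-in data. X_*: semantic feature time courses for each attention condition
# (time x features); Y_*: BOLD responses for the same runs (time x voxels).
rng = np.random.default_rng(0)
X_people, X_vehicles = rng.standard_normal((2, n_timepoints, n_features))
Y_people, Y_vehicles = rng.standard_normal((2, n_timepoints, n_voxels))

def fit_semantic_weights(X, Y, alpha=100.0):
    # One regularized linear model per voxel; rows of the returned array are
    # per-voxel semantic weight vectors (n_voxels x n_features).
    model = Ridge(alpha=alpha, fit_intercept=True)
    model.fit(X, Y)
    return model.coef_

W_people = fit_semantic_weights(X_people, Y_people)
W_vehicles = fit_semantic_weights(X_vehicles, Y_vehicles)

def row_cosine(A, B):
    # Cosine similarity between corresponding rows of A and B.
    num = np.sum(A * B, axis=1)
    denom = np.linalg.norm(A, axis=1) * np.linalg.norm(B, axis=1)
    return num / denom

# Attentional shift per voxel: dissimilarity of the semantic weight vectors
# estimated under the "people" and "vehicles" attention conditions.
attentional_shift = 1.0 - row_cosine(W_people, W_vehicles)
print(attentional_shift.shape)  # (n_voxels,)

In the actual analysis, banded ridge regression would allow separate regularization strengths for different feature spaces, typically chosen by cross-validation; the cross-condition comparison of per-voxel weights is what quantifies the attentional shifts reported in the results.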
Topic Areas: Meaning: Lexical Semantics, Computational Approaches