Slide Sessions


Slide Session B

Friday, October 25, 3:30 - 4:30 pm, Great Hall 1 and 2

Chair: Jonathan Peelle, Northeastern University

Talk 1: A central role for semantic knowledge in constructing mental scenes – insights from the syndrome of semantic dementia

Muireann Irish1, Thanh Vinh Cao1, Rebekah Ahmed1, John Hodges1; 1The University of Sydney, Camperdown, NSW 2019, Australia

The syndrome of semantic dementia (SD) has provided compelling insights into the computational architecture of conceptual knowledge and its role in supporting other higher-order cognitive endeavours such as memory and creativity. Characterised by profound pan-modal conceptual degradation, SD patients display marked semantic impairments, attributable to the progressive deterioration of a central amodal semantic hub located in the anterior temporal lobes. Mounting evidence suggests that the semantic impairment in SD disrupts not only the capacity to reconstruct events from the past via autobiographical memory, but also the ability to envisage contextually rich events that might unfold in the future. Here we explored how degradation of the semantic knowledge base impacts the mental construction of relatively commonplace scenes in the mind’s eye. Fourteen patients with a clinical diagnosis of left-predominant SD and 24 age- and education-matched healthy older Control participants were recruited. Participants completed a comprehensive battery of neuropsychological tests assessing episodic and semantic memory, visuospatial processing, and executive function, and underwent 3T structural MRI. Scene construction was assessed using the Hassabis et al. (2007) scene construction task, which requires individuals to mentally construct and describe a series of commonplace scenarios (e.g., Beach, Forest, Museum) in rich detail. Narratives were scored in terms of the level of contextual detail, the spatial integration of the scenes, and subjective ratings of vividness, difficulty, and sense of presence. Whole-brain voxel-based morphometry explored brain-behaviour associations between scene construction performance and grey matter intensity, with significant clusters extracted voxel-wise, corrected for False Discovery Rate at q < .05. Relative to Controls, scene construction performance was significantly compromised in SD [F(1,36) = 50.1; p < .001], driven by the impoverished provision of contextual details [F(1,36) = 36.2; p < .001], spanning all detail subcategories (all p values < .01). Controlling for language function eliminated these group differences (p > .8). Crucially, subjective ratings of vividness, sense of presence, and task difficulty did not differ between the two groups (all p values > .05), and the scenes generated by SD patients were as spatially integrated as those generated by Controls (p > .05). Correlation analyses in the SD group suggested that overall task performance was strongly associated with measures of semantic processing and executive function (all r values > .68), while spatial coherence was significantly correlated with non-verbal episodic memory retrieval and visuo-constructive abilities (all r values > .64). Voxel-based morphometry analyses indicated significant involvement of the left posterior hippocampus and bilateral anterior temporal cortices in modulating overall scene construction performance, with additional involvement of the left angular gyrus for the provision of contextual details. Our findings suggest that deterioration of the semantic knowledge base in SD disrupts the verbal description of internally generated scene representations of commonplace scenarios, but not the subjective mental experience of spatial coherence or vividness.
Future studies employing less verbally demanding tasks will be required to determine the extent to which complex forms of mental construction remain intact in this syndrome, and to clarify the role of posterior parietal cortical regions in supporting these capacities.
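The voxel-wise FDR correction described above can be illustrated with a minimal sketch, assuming the Benjamini-Hochberg procedure as implemented in statsmodels (not necessarily the routine the authors used); the p-values below are random placeholders, not study data.

import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
voxel_pvals = rng.uniform(0, 1, size=10_000)  # placeholder voxel-wise p-values

# reject[i] is True where the voxel survives FDR correction at q < .05
reject, p_corrected, _, _ = multipletests(voxel_pvals, alpha=0.05, method="fdr_bh")
print(f"{reject.sum()} of {voxel_pvals.size} voxels survive FDR at q < .05")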

Talk 2: Cueing Improves Expressive Emotional Aprosodia in Acute Right Hemisphere Stroke: Investigating Neural and Acoustic Characteristics

Shannon M. Sheppard1, Gabriel Cler1, Sona Patel2, Ji Sook Ahn2, Lynsey Keator3, Isidora Diaz-Carr4, Argye E. Hillis4-5, Alexandra Zezinka Durfee6; 1University of Washington, Seattle, WA, 2Seton Hall University, 3University of Delaware, 4Johns Hopkins University School of Medicine, 5Johns Hopkins University, 6Towson University

Introduction: Right hemisphere (RH) stroke frequently impacts expressive emotional prosody (the pitch, rate, loudness, and rhythm of speech), resulting in expressive aprosodia. Expressive aprosodia is associated with negative outcomes including reduced social networks (Hewetson et al., 2021), but little is known about its neural correlates or effective treatments. Expressive aprosodia can arise from impaired motor planning and implementation, or from a lack of awareness of the acoustic characteristics that convey specific emotions (e.g., sadness is conveyed with a quiet volume and low pitch). We aimed to 1) identify the specific acoustic features of five emotions (happy, sad, angry, afraid, surprised) that differed between healthy controls and individuals with expressive aprosodia not resulting from motor deficits, 2) determine whether aprosodia would improve when specific cues (e.g., happy = high pitch, fast rate) were provided for each emotion, and 3) investigate neural correlates.

Methods: Patient group: 21 participants with acute RH damage following ischemic stroke and expressive aprosodia were enrolled and tested within five days of hospital admission. Aprosodia diagnosis was confirmed by speech-language pathologists. Control group: 25 healthy age-matched controls. Prosody Testing and Analysis: Speech was recorded while participants completed two tasks: 1) reading aloud 20 semantically neutral sentences with a specified emotion (e.g., Happy: “He is going home today.”) without acoustic cues, and 2) reading aloud the same sentences with acoustic cues provided (e.g., Happy: fast rate, high pitch). Automated routines in Praat were used to extract the acoustic characteristics relevant to each emotion (e.g., fundamental frequency variation, duration) for each sentence. Mixed effects linear models were used to evaluate whether the acoustic characteristics of each emotion differed between the control and patient groups, and to determine whether cueing improved impaired characteristics of speech. Neuroimaging and Analysis: Acute neuroimaging, including diffusion-weighted imaging (DWI), was acquired. Areas of ischemia were identified and traced on DWI images using MRIcron (Rorden & Brett, 2000). Lesion volume and the proportion of damaged tissue in regions of interest (ROIs) in the JHU atlas were calculated. Mixed effects linear models evaluated whether changes to specific acoustic features were predicted by damage to right hemisphere ROIs.

Results: The patient group differed from controls only on emotions with positive valence (happy and surprised). They had significantly lower pitch (surprised: p = 0.009; happy: p < 0.001) and slower rate (surprised: p = 0.002; happy: p = 0.002). Cueing improved speech rate for happy (p = 0.04) and surprised sentences (p < 0.001), and improved pitch in happy (p < 0.001), but not surprised, sentences. Lesion mapping analyses revealed that damage to the right putamen, external capsule, fronto-occipital fasciculus, and posterior superior temporal gyrus was implicated in expressive aprosodia.

Conclusion: Expressive aprosodia primarily impacts the expression of emotions with positive valence, but providing acoustic cues can improve pitch and speech rate. Damage to both right hemisphere cortical and subcortical structures was implicated in expressive aprosodia. These findings have clinical implications for the development of expressive aprosodia treatments, and contribute to neural and cognitive models of prosody expression.
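A minimal sketch of the acoustic extraction step, assuming Praat's routines are accessed from Python via the praat-parselmouth library rather than native Praat scripts; the file name and exact feature set are illustrative assumptions, not the study's own code.

import numpy as np
import parselmouth  # Python interface to Praat (praat-parselmouth)

# Hypothetical recording of one emotional-sentence production
snd = parselmouth.Sound("happy_sentence_01.wav")

# Pitch (F0) track; unvoiced frames come back as 0 Hz, so drop them
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]
f0 = f0[f0 > 0]

features = {
    "duration_s": snd.duration,        # proxy for speech rate
    "f0_mean_hz": float(np.mean(f0)),  # overall pitch height
    "f0_sd_hz": float(np.std(f0)),     # fundamental frequency variation
    "intensity_db": float(np.mean(snd.to_intensity().values)),  # loudness (simple frame average)
}
print(features)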

Talk 3: Relative Brain Age as a Biomarker for Language Function in Acute Aphasia

Sigfus Kristinsson1, John Absher2, Sarah Goncher2, Roger Newman-Norlund1, Natalie Hetherington1, Alex Teghipco1, Chris Rorden1, Leonardo Bonilha1, Julius Fridriksson1; 1University of South Carolina, SC, USA, 2Prisma Health-Upstate, SC, USA

Introduction: Although factors such as lesion location, age, and stroke severity account for variability in language function, long-term prognostication remains problematic in aphasia.1-3 Recently, we found that brain age (a neuroimaging-derived measure of brain atrophy) predicted language function at stroke onset and long-term recovery in a small sample of stroke survivors.4 Here, we examined the extent to which brain age explains variability in language performance in a larger, non-selective sample of acute stroke patients.

Methods: The current study relies on archival data from 1,794 individuals admitted to the Prisma Health-Upstate facility in Greenville, SC (F/M, 889/901; age, 67.8±15.1 y). Participants underwent routine clinical neuroimaging (T1-weighted) and their language performance was assessed by an on-call clinician. After excluding participants with structural brain pathophysiology, MRI data were preprocessed using established procedures and we estimated the brain age of 1,027 participants using the publicly available BrainAgeR analysis pipeline.5,6 To overcome the effects of biased brain age estimates in younger and older individuals, we calculated Relative Brain Age (RBA) as follows6: RBA = Estimated Brain Age − Expected Brain Age, where Expected Brain Age = E(Estimated Brain Age | Chronological Age). Estimated Brain Age represents the predicted brain age based on BrainAgeR, whereas Expected Brain Age was calculated by regressing Estimated Brain Age on Chronological Age. Thus, a positive RBA reflects an “older looking brain” and a negative RBA a “younger looking brain”, given chronological age. Logistic regression models were constructed to examine the association between RBA and the presence/absence of aphasia, and regression models to investigate the effect of RBA on the following behavioral outcomes: NIHSS Language (N=478), WAB Auditory Comprehension (N=52), WAB Yes/No Questions (N=87), WAB Naming (N=87), and WAB Repetition (N=290). Models were adjusted for chronological age, lesion size, and affected hemisphere.

Results: Our primary analyses revealed a significant interaction between RBA and lesion size (β=.001, p<.01) for the prediction of aphasia presence, suggesting that a negative RBA (‘younger looking brain’) is associated with absence of aphasia in the case of relatively small lesions. RBA was not associated with performance on any of the continuous language outcomes. To scrutinize the relationship between RBA and lesion size, we added a binary term reflecting brain resilience (positive/negative RBA). We observed a significant interaction between brain resilience and lesion size for the NIHSS Language Score (β=.002, p<.05), WAB Yes/No (β=-.002, p<.001), and WAB Naming (β=-.003, p<.01), suggesting that preserved brain resilience is predictive of better language performance in smaller lesions only. We similarly observed a significant interaction between brain resilience and lesion in the right hemisphere for WAB Repetition (β=-.001, p<.05), and between brain resilience and lesion in the left hemisphere for WAB Naming (β=-.001, p<.05) and WAB Repetition (β=-.001, p<.001).

Discussion: Our findings suggest that brain age explains variability in language performance not accounted for by lesion characteristics and age. In particular, brain resilience emerged as a prominent predictor of language performance. Although a more fine-grained analysis is underway, we are encouraged by the positive findings thus far and contend that they promise to inform prognostication procedures in post-stroke aphasia.
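Operationally, the RBA computation reduces to taking residuals from a regression of estimated on chronological age; below is a minimal sketch under that reading, with synthetic ages standing in for BrainAgeR output.

import numpy as np

# Synthetic stand-ins: chronological ages and BrainAgeR-estimated brain ages
chron_age = np.array([54.0, 61.0, 67.0, 72.0, 80.0])
est_brain_age = np.array([58.0, 59.5, 70.0, 70.5, 86.0])

# Expected brain age: linear fit of estimated brain age on chronological age
slope, intercept = np.polyfit(chron_age, est_brain_age, deg=1)
expected_brain_age = slope * chron_age + intercept

# RBA = estimated - expected; positive values mark an "older looking" brain
rba = est_brain_age - expected_brain_age
print(np.round(rba, 2))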

Talk 4: A hierarchical ensemble approach to predicting response to phonological versus semantic naming intervention in aphasia using multimodal data

Dirk Den Ouden1, Alex Teghipco1, Sigfus Kristinsson1, Chris Rorden1, Grant Walker2, Julius Fridriksson1, Leonardo Bonilha1; 1University of South Carolina, 2University of California, Irvine

Introduction: Treatment selection for persons with aphasia (PWA) is aided by improved prediction of treatment response in general, but especially of response to specific types of intervention. Several studies have attempted to predict response to ‘phonological’ interventions for lexical production, focused on word-form representations, versus ‘semantic’ interventions, focused on meaning representations, based on biographical, behavioral, or neurological variables in isolation. Here, we tailored treatment-response predictions to individuals by integrating multimodal information while considering multivariate relationships of varying complexity.

Methods: Out of 93 PWA who received both phonological and semantic interventions, 34% exhibited a clinically meaningful improvement on the Philadelphia Naming Task (>9/175 points), with 9 responding to phonological and 23 to semantic treatment. Response was predicted in a nested leave-one-out cross-validation scheme using a set of 345 baseline biographical, behavioral, and neuroimaging variables. Behavioral variables included latent constructs of impaired domains based on our prior modeling efforts (Walker et al., 2018). Neuroimaging variables spanned measures of lesion load, task-based BOLD response, cerebral blood flow, fractional anisotropy, mean diffusivity, and functional and structural connectivity. Unilateral variables were expressed in proportion to their bilateral counterparts. Given that the factors determining treatment response in general may differ from those determining response to phonological versus semantic treatment, we adopted a flexible hierarchical modeling approach (see the sketch following this abstract). We first trained a binary classifier to predict general treatment response. Then, a second binary classifier was trained on ‘responders’ to adjudicate between the two interventions. Both classifiers were ensembles of decision trees, boosted using RUSBoost. Model tuning included identification of the most reliably predictive features through stability selection, enhanced by combining multiple complementary algorithms for forming the feature ensemble (Teghipco et al., forthcoming).

Results: General treatment response was predicted with 77% balanced accuracy and 0.72 AUC (p<0.0001). In the correctly predicted responders, a second model achieved 85% balanced accuracy and 0.79 AUC (p<0.0001). The combined model had an overall balanced accuracy of 72% and an AUC of 0.77 (p<0.0001). Different patterns of feature weights drove model performance, and feature importance did not correlate between the two models (p=0.4). Nevertheless, some features were influential across both models, highlighting the complex interaction of latent ability estimates and error types, consistency across multiple baseline picture-naming sessions, and whole-brain CBF. Non-responders were more strongly predicted by high ventral functional connectivity, inconsistency of picture-naming errors, lesion-load characteristics, low performance on semantic judgment, reduced CBF, and higher stroke severity. Semantic responders were more strongly predicted by high perilesional temporal-lobe BOLD response, while phonological responders were more strongly predicted by high fractional anisotropy across the brain, especially in the dorsal stream. No biographical variables were among the top predictors of treatment response.
Conclusions: The hierarchical multimodal approach we present here applies machine learning to predict whether PWA will respond to impairment-based naming treatment, and whether ‘responders’ show greater effects of phonologically focused versus semantically focused intervention. It is particularly successful in predicting response to phonological versus semantic intervention. These results can aid in the selection of impairment-based treatment for PWA with naming difficulties.
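A minimal sketch of the two-stage scheme described in the Methods, assuming RUSBoost as implemented in the imbalanced-learn Python package; the features, labels, and hyperparameters are synthetic stand-ins, and the nested leave-one-out cross-validation and stability selection steps are omitted for brevity.

import numpy as np
from imblearn.ensemble import RUSBoostClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins: 93 patients x 345 multimodal baseline variables
X = rng.normal(size=(93, 345))
responded = rng.random(93) < 0.34  # stage 1 label: any treatment response
treatment = rng.choice(["phonological", "semantic"], size=93)  # stage 2 label

# Stage 1: responder vs. non-responder; RUSBoost counters class imbalance by
# randomly undersampling the majority class within each boosting round
stage1 = RUSBoostClassifier(n_estimators=200, random_state=0)
stage1.fit(X, responded)

# Stage 2: trained on responders only, adjudicates between the two interventions
stage2 = RUSBoostClassifier(n_estimators=200, random_state=0)
stage2.fit(X[responded], treatment[responded])

def predict_response(x):
    """Route one patient through the hierarchy: stage 1, then stage 2 if responder."""
    if not stage1.predict(x.reshape(1, -1))[0]:
        return "non-responder"
    return stage2.predict(x.reshape(1, -1))[0]

print(predict_response(X[0]))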

 
