Distinct brain morphometry patterns revealed by deep learning improve prediction of aphasia severity
Poster C57 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.
Alex Teghipco¹, Roger Newman-Norlund¹, Julius Fridriksson¹, Christopher Rorden¹, Leonardo Bonilha²; ¹University of South Carolina, ²Emory University
Mounting evidence suggests that post-stroke aphasia severity depends on the integrity of the brain beyond the stroke lesion. Neuroimaging models that combine lesion anatomy with global and regional measures of brain integrity explain aphasic symptoms better, yet some interindividual variability remains unaccounted for. One source of variability overlooked by both univariate and multivariate models is the spatial interdependence among brain regions and between those regions and the lesion. Here, we tested whether deep learning with convolutional neural networks (CNNs) applied to whole-brain morphometry (i.e., tissue volumes segmented by FSL's FAST) and lesion anatomy better predicts which individuals with chronic stroke (N=231) have severe aphasia, and whether encoding spatial dependencies in the data improves predictions by identifying unique individualized spatial patterns. Over repeats of a nested cross-validation scheme, we show that a tuned CNN achieves significantly higher accuracy and F1 scores than a tuned support vector machine (SVM) that discounts spatial dependencies, even when the SVM is nonlinear or is trained on lower-dimensional data produced by widely used linear or nonlinear dimensionality reduction techniques. Ensemble averaging and stacking of model predictions did not improve performance, implying that the more conventional machine learning models provided no predictive information complementary to the CNN. Performance parity was achieved only when the SVM was trained directly on the latent features learned by the CNN. The SVM performed nearly as well when trained on the higher-dimensional feature saliency maps returned by the CNN, but only when saliency was more likely to reflect the unique spatial patterns that a CNN can capture. Saliency maps demonstrated that the CNN learned more widely distributed patterns of brain atrophy predictive of aphasia severity, whereas the SVM focused on the area around the lesion. Ensemble clustering of the CNN saliency maps revealed roughly a dozen distinct morphometry patterns that were unrelated to lesion size, were highly consistent across individuals, and implicated unique brain networks. Although these patterns showed a tendency for severe aphasia to be predicted from brain features contralateral to the stroke, individualized predictions of severity depended on both ipsilateral and contralateral features outside the lesion. Our findings illustrate the heterogeneity in the spatial distribution of atrophy in individuals with aphasia, show that these patterns are predictive of severity, and underscore the potential for deep learning to improve prognostication of behavioral outcomes from neuroimaging data by exploiting spatial dependence at multiple scales in multivariate feature space.
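As a purely illustrative sketch of the kind of comparison described above (not the authors' actual pipeline), the snippet below shows a nested cross-validation loop that tunes and evaluates an SVM baseline on flattened, spatially agnostic features with scikit-learn. The synthetic arrays, the PCA step standing in for the dimensionality-reduction variants, and the hyperparameter grid are all assumptions; in the deep-learning arm of such a comparison, the estimator would be replaced by a 3D CNN trained on the image volumes.

```python
# Illustrative sketch only: nested cross-validation of a tuned SVM baseline on
# synthetic data. Names, grids, and dimensions are assumptions for illustration.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(231, 2000))   # stand-in for flattened morphometry + lesion features
y = rng.integers(0, 2, size=231)   # stand-in for severe vs. non-severe aphasia labels

# SVM pipeline that ignores spatial structure; PCA stands in for the
# dimensionality-reduction variants mentioned in the abstract.
svm = make_pipeline(StandardScaler(), PCA(n_components=50), SVC())

# Inner loop: hyperparameter tuning (linear vs. nonlinear kernel, regularization).
grid = GridSearchCV(
    svm,
    param_grid={"svc__C": [0.1, 1, 10], "svc__kernel": ["linear", "rbf"]},
    cv=StratifiedKFold(5, shuffle=True, random_state=0),
    scoring="f1",
)

# Outer loop: unbiased performance estimate of the tuned model.
outer = StratifiedKFold(5, shuffle=True, random_state=1)
scores = cross_val_score(grid, X, y, cv=outer, scoring="f1")
print(f"nested-CV F1: {scores.mean():.3f} ± {scores.std():.3f}")
```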
Topic Areas: Disorders: Acquired, Methods