Toward an implanted language neuroprosthesis for severe aphasia
Poster B86 in Poster Session B, Tuesday, October 24, 3:30 - 5:15 pm CEST, Espace Vieux-Port
William Gross1, Max Krucoff1, Leonardo Fernandino1, Jeffrey Binder1; 1Medical College of Wisconsin
Individuals with persistent severe aphasia after stroke typically have limited rehabilitation options. Because further conventional speech therapy is of limited value in this setting, novel therapeutic approaches are needed. This study explored the potential of an implanted neuroprosthesis for decoding lexical-semantic targets from neural activity. Prior brain-computer interfaces (BCIs) proposed as communication-assistance devices have largely focused on decoding the motor articulation of speech, which is useful only in conditions of paralysis where pre-articulatory phonological skills are preserved, such as ALS. Despite impressive advances in this technology, the approach is not applicable to individuals with severe phonological deficits, such as those with stroke-induced aphasia. Here we propose an alternative: decoding neural activity associated with lexical-semantic retrieval rather than motor planning. We performed electrocorticographic (ECoG) recordings during an awake craniotomy on a single patient engaged in picture-naming and auditory definition-naming tasks (151 total trials). Raw ECoG signals were filtered to high-gamma power (70-200 Hz) and fed into a machine learning model. The model employed several convolutional layers to capture spatial patterns in the ECoG signals, followed by transformer layers to efficiently model interactions between electrodes. To leverage additional data, the model was first pretrained as an autoencoder on a large corpus of signals recorded from implanted sEEG electrodes, exposing it to general patterns of intracranial electrical activity. Subsequently, the model was trained on the patient's ECoG data to predict individual lexical concepts. We attempted to predict the vector representation of each word in the experiential model space described by Binder et al. (Cogn Neuropsychol, 2016).
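The first preprocessing step above, reducing raw ECoG to high-gamma (70-200 Hz) power per electrode, can be sketched as follows. This is a minimal illustration, not the authors' pipeline: a real implementation would typically use a proper band-pass filter plus the Hilbert envelope rather than the crude brick-wall FFT mask used here, and the function name and signature are hypothetical.

```python
import numpy as np

def high_gamma_power(ecog, fs, low=70.0, high=200.0):
    """Estimate per-electrode high-gamma band power.

    ecog : array of shape (n_electrodes, n_samples)
    fs   : sampling rate in Hz
    Returns mean band power per electrode, shape (n_electrodes,).
    """
    n = ecog.shape[-1]
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    spectrum = np.fft.rfft(ecog, axis=-1)
    band = (freqs >= low) & (freqs <= high)
    spectrum[..., ~band] = 0.0               # brick-wall band-pass (illustrative only)
    filtered = np.fft.irfft(spectrum, n=n, axis=-1)
    return np.mean(filtered ** 2, axis=-1)   # mean squared amplitude = band power
```

A 100 Hz component survives this filter while a 10 Hz component is removed, so the returned values track high-gamma activity; in the abstract's pipeline, features like these (per trial and time window) are what the convolutional front end consumes.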
We chose to train the model to map ECoG signals onto a continuous semantic space (rather than onto discrete word tokens) to force the model to attend to semantic (rather than phonological) features, and to enable zero-shot prediction (i.e., predicting a word that was not part of the training set). Training continued until performance plateaued, and accuracy was tested using the rank accuracy of the predicted vector among all candidate vectors on a held-out sample, replicated 16 times to establish reproducibility. The model performed significantly above chance on the held-out sample (average rank accuracy of 54%, p<0.05). We anticipate higher accuracy with a larger number of trials. Our study illustrates the successful implementation of a machine learning model that can decode word meanings from individual trials of ECoG data. Although accuracy has not yet reached practically useful levels, this result supports the feasibility of the approach.
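The rank-accuracy metric above can be made concrete with a small sketch. The abstract does not specify the similarity measure or normalization, so this example assumes cosine similarity and the common convention that a score of 1.0 means the true word is ranked first and 0.5 is chance; the function is hypothetical, not the authors' code.

```python
import numpy as np

def rank_accuracy(predicted, candidates, target_idx):
    """Rank accuracy of one decoded vector against a candidate vocabulary.

    predicted  : (d,) decoded semantic vector
    candidates : (n_words, d) vectors for every word in the test vocabulary
    target_idx : index of the true word within `candidates`
    Returns a score in [0, 1]; 1.0 = true word ranked first, 0.5 = chance.
    """
    # Cosine similarity between the prediction and every candidate vector.
    sims = candidates @ predicted / (
        np.linalg.norm(candidates, axis=1) * np.linalg.norm(predicted) + 1e-12
    )
    # Rank of the true word: how many candidates score strictly higher (0 = best).
    rank = np.sum(sims > sims[target_idx])
    return 1.0 - rank / (len(candidates) - 1)
```

Under this convention, the reported 54% average corresponds to the true word sitting, on average, slightly above the middle of the ranked candidate list, which is why the authors describe it as above chance but below practical accuracy.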
Topic Areas: Speech-Language Treatment, Meaning: Lexical Semantics