Electrophysiological evidence for prediction errors during perception of degraded spoken sentences
Poster D65 in Poster Session D, Wednesday, October 25, 4:45 - 6:30 pm CEST, Espace Vieux-Port
James Webb1, Ediz Sohoglu1; 1University of Sussex
Prediction facilitates language comprehension, but how are predictions combined with sensory input during perception (de Lange et al., 2018)? While there is accumulating evidence that top-down predictions strongly influence perception, there are two possible ways this could be instantiated. One possibility is that predictions enhance (or ‘sharpen’) neural representations of speech input. The other is that predictions suppress neural representations of speech input so that only unexpected information (‘prediction errors’) is processed further. Previous evidence suggests that cortical speech representations are best explained by prediction error computations rather than the alternative ‘sharpened signal’ account (Blank and Davis, 2016; Sohoglu and Davis, 2020). In both studies, a two-way interaction between sensory detail and prior knowledge was found, which is uniquely consistent with the prediction error account. This interaction arises because when sensory signals are strongly predicted and signal quality is high, there is little prediction error and diminished neural representations, whereas when signal quality is low, strong predictions lead to increased prediction errors because the acoustic form of speech mismatches prior expectations. However, these earlier studies used an artificial listening situation: isolated spoken words, with predictions conveyed by written cues. It is therefore unclear whether the results generalise to naturalistic listening, in which listeners hear strings of words and predictions are derived directly from the speech signal itself. In our experiment, listeners (N=30) heard degraded (16-channel noise-vocoded) sentences in which the last word was strongly or weakly predicted by the preceding words, based on cloze probability (Peelle et al., 2020). We also manipulated the signal quality of the final word (two, four and eight vocoder channels).
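The noise-vocoding manipulation described above can be sketched as follows. This is a minimal illustration of the general technique (Shannon-style channel vocoding), not the study's actual stimulus pipeline: the band edges, filter order and carrier choice here are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels, fmin=100.0, fmax=7000.0):
    """Noise-vocode a signal: split it into log-spaced frequency bands,
    extract each band's amplitude envelope, and use that envelope to
    modulate band-limited noise. Fewer channels = more degraded speech."""
    edges = np.geomspace(fmin, fmax, n_channels + 1)  # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal), dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)
        env = np.abs(hilbert(band))                   # amplitude envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier                          # envelope-modulated noise
    return out
```

Varying `n_channels` (e.g. 2, 4, 8) reproduces the signal-quality manipulation: the envelope is preserved per band, but spectral detail within each band is replaced by noise.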
Importantly, all sentences were plausible and semantically coherent. Behaviourally, listeners’ ratings of final-word clarity were higher both when signal quality increased and when the word was strongly predicted. Using TRF analysis (Crosse et al., 2021) of EEG responses to the final word, we measured how prediction strength and signal quality modulated cortical representations of speech acoustic features (spectral and temporal modulations). We observed a significant interaction between prediction strength and signal quality (p = .02) such that neural representations (TRF forward-model accuracies) increased with strong predictions but only when signal quality was low. These results are more consistent with prediction error representations and show that previous findings extend to more naturalistic listening situations. Prediction error computations thus appear to be a general and central feature of cortical speech processing.
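The forward-TRF approach can be illustrated with a minimal NumPy sketch. The study used the mTRF framework of Crosse et al. (2021); the ridge-regression implementation below is a simplified stand-in (single stimulus feature, single EEG channel, hand-picked lags and regularisation), showing how a forward model predicts the neural response from lagged stimulus features and how model accuracy is scored as the correlation between predicted and actual EEG.

```python
import numpy as np

def lag_matrix(stim, lags):
    """Design matrix of time-lagged copies of a 1-D stimulus feature."""
    n = len(stim)
    X = np.zeros((n, len(lags)))
    for j, lag in enumerate(lags):
        if lag >= 0:
            X[lag:, j] = stim[:n - lag]
        else:
            X[:n + lag, j] = stim[-lag:]
    return X

def trf_fit(stim, eeg, lags, lam=1.0):
    """Forward TRF via ridge regression: weights mapping lagged stimulus to EEG."""
    X = lag_matrix(stim, lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

def trf_accuracy(stim, eeg, lags, weights):
    """Forward-model accuracy: Pearson r between predicted and actual EEG."""
    pred = lag_matrix(stim, lags) @ weights
    return np.corrcoef(pred, eeg)[0, 1]
```

In this framing, the reported condition differences are differences in `trf_accuracy` (forward-model prediction accuracy) across prediction-strength and signal-quality conditions.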
Topic Areas: Speech Perception