Neural correlates of speech segmentation in atypical-language adults
Poster A9 in Poster Session A - Sandbox Series, Thursday, October 24, 10:00 - 11:30 am, Great Hall 4
Panagiotis Boutris1, Alissa Ferry2, Perrine Brusini1; 1University of Liverpool, 2University of Manchester
Speech is processed sequentially and in chunks, i.e. words. Before language learners can start acquiring other linguistic aspects (e.g. syntax; see Brusini et al., 2021), they must first discover the words. Newborns can immediately begin segmenting the speech they hear (Ferry et al., 2016; Fló et al., 2019), utilising a variety of cues. However, it is not clear what drives individual differences in speech processing, and subsequently in language acquisition, in cases where language is impaired (e.g. dyslexia, SLI/DLD, ASD). There is evidence of “poor segmenters” among individuals with dyslexia (Leong & Goswami, 2014), SLI/DLD (Mainela-Arnold et al., 2014; Marshall, 2009; Obeid et al., 2016) or ASD (Paul et al., 2005), linked to an inability to process cues such as stress and transitional probabilities (TPs). Although a temporal-sampling deficit has been proposed as the factor underlying poor segmentation skills in dyslexia (see Goswami, 2011), our understanding of the neural and perceptual mechanisms that fail in language impairments is still very incomplete. Here, we investigate the ability of atypical-language adults to use TPs, stress, and their combination to segment artificial speech. We invited young adults diagnosed with dyslexia, SLI/DLD and/or ASD to take part in a passive-listening word-learning task while we recorded their EEG; data collection is still ongoing. The stimuli comprised artificial streams of 6 trisyllabic words. The conditions were: 1) a TP-only stream (Saffran et al.); 2) a stress-only stream, where words appeared in fixed order (TP=1) and stress was placed on the first syllable; 3) a mixed-cue stream with TPs and stress on the first syllable; 4) a mixed-cue stream with TPs and stress on the last syllable; 5) a random-syllable stream and 6) a random-stress stream, where TPs were low (TP=0.2-0.5) and stress was randomly assigned to one of the three syllables, respectively.
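To illustrate the TP logic behind such stimuli, here is a minimal sketch (the syllables, word inventory, and stream length are illustrative assumptions, not the authors' materials) of building a TP-only stream: 6 trisyllabic words concatenated in random order, so that word-internal transitions are deterministic (TP = 1) while across-word transitions are weaker.

```python
# Hypothetical sketch of a TP-only artificial-speech stream, in the
# style of statistical-learning stimuli: syllables and words are
# invented for illustration, not the study's actual materials.
import random
from collections import Counter

random.seed(0)
syllables = ["pa", "bi", "ku", "ti", "go", "la",
             "do", "re", "mi", "fa", "so", "nu",
             "ze", "ka", "lo", "vu", "te", "ra"]
# 6 trisyllabic "words", each syllable used in exactly one word
words = [tuple(syllables[i:i + 3]) for i in range(0, 18, 3)]

def make_stream(n_words):
    """Concatenate words in random order, avoiding immediate repeats."""
    stream, prev = [], None
    for _ in range(n_words):
        w = random.choice([w for w in words if w != prev])
        stream.extend(w)
        prev = w
    return stream

def transition_probs(stream):
    """Empirical P(next syllable | current syllable) for observed pairs."""
    pairs = Counter(zip(stream, stream[1:]))
    firsts = Counter(stream[:-1])
    return {(a, b): c / firsts[a] for (a, b), c in pairs.items()}

stream = make_stream(300)          # 300 words -> 900 syllables
tps = transition_probs(stream)
within = [tps[(w[0], w[1])] for w in words]  # word-internal TPs
print(min(within))                 # 1.0: transitions inside words are certain
```

Because each syllable belongs to exactly one word, word-internal TPs are 1.0, while the transition from a word's final syllable to the next word's first syllable is spread over the remaining words, giving the low across-word TPs that mark word boundaries.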
After listening to each stream, participants performed a forced-choice task in which they chose between an unstressed stream-word and an unstressed part-word constructed from the last syllable of one stream-word and the first two syllables of another. To more fully understand any shortcomings of speech tracking, we opted to extract both power and phase at the word- and syllable-onset frequencies. Power can tell us whether the brain follows the frequency of occurrence of each unit (word/syllable); however, we hypothesise that phase may capture more subtle differences in brain alignment to the onsets of the speech units in each condition. Our preliminary results suggest a difficulty in processing mixed-cue information: behavioural results revealed a trend towards better performance in the Stress-only condition. Power shows word-onset tracking in all cue conditions; in contrast, phase appears more precise in the Stress-only and TP-only conditions, suggesting an inability to integrate both cues to extract word-boundary information. As our preliminary results suggest, phase may be a better link between poor language performance and an underlying inability to integrate cues; furthermore, language impairment may be driven by more fragmented speech processing, in which a variety of cues cannot all be used at once.
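The power/phase analysis described above can be sketched with a standard frequency-tagging approach: spectral power and inter-trial phase coherence (ITC) at the syllable and word rates. The rates (4 Hz syllables, hence 4/3 Hz words), trial counts, and the simulated signal below are illustrative assumptions standing in for EEG, not the study's parameters or data.

```python
# Hedged sketch of frequency tagging: power and inter-trial phase
# coherence (ITC) at the word rate. The "EEG" here is simulated noise
# plus a word-rate component phase-locked across trials.
import numpy as np

fs = 250.0                  # sampling rate (Hz), assumed
syll_rate = 4.0             # syllable presentation rate (Hz), assumed
word_rate = syll_rate / 3   # trisyllabic words -> 4/3 Hz
n_trials, dur = 30, 9.0     # 9 s epochs = 12 whole words per epoch
t = np.arange(int(fs * dur)) / fs

rng = np.random.default_rng(1)
trials = (np.sin(2 * np.pi * word_rate * t)          # phase-locked tracking
          + rng.normal(0, 1.0, (n_trials, t.size)))  # trial noise

freqs = np.fft.rfftfreq(t.size, 1 / fs)
spectra = np.fft.rfft(trials, axis=1)

def at_freq(target):
    """Mean power and ITC at the FFT bin nearest `target` Hz."""
    k = np.argmin(np.abs(freqs - target))
    power = np.mean(np.abs(spectra[:, k]) ** 2)
    # ITC: length of the mean unit phase vector across trials (0-1)
    itc = np.abs(np.mean(spectra[:, k] / np.abs(spectra[:, k])))
    return power, itc

p_word, itc_word = at_freq(word_rate)
p_ctrl, itc_ctrl = at_freq(word_rate + 0.5)  # control frequency, no signal
print(itc_word > itc_ctrl)                   # True: phase locks at word rate
```

The design choice mirrors the abstract's reasoning: power at the word rate indexes whether the unit's rate of occurrence is tracked at all, while ITC asks whether the phase of that tracking is consistently aligned to word onsets across trials, which is the more sensitive measure the authors hypothesise distinguishes the cue conditions.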
Topic Areas: Speech Perception, Disorders: Developmental