Do visual speech cues facilitate infants’ neural tracking of speech?
Poster E100 in Poster Session E, Thursday, October 26, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
Also presenting in Lightning Talks E, Thursday, October 26, 10:00 - 10:15 am CEST, Auditorium
Antonia Jordan-Barros1,2, Melis Çetinçelik1, Caroline Rowland1,3, Tineke Snijders1,3,4; 1Max Planck Institute for Psycholinguistics, 2University College London, 3Donders Institute for Brain, Cognition and Behaviour, 4Tilburg University
In face-to-face interactions with their caregivers, infants receive multimodal language input from both the auditory speech signal and the visual speech cues on the speaker’s face. Previous research has shown that visual speech cues (i.e., the rhythmic movements of the lips, mouth and jaw) can modify speech perception in adults and infants (Crosse et al., 2015; Tan et al., 2022; Teinonen et al., 2008). Infants between 6 and 12 months may be especially sensitive to these cues, as they attend more to the mouth of a talking face than to the eyes (Lewkowicz & Hansen-Tift, 2012). One mechanism argued to play a key role in speech processing in both adult and infant listeners is neural tracking of speech. This refers to the phase-locking of cortical oscillations to the amplitude envelope of the speech signal at multiple frequencies, such as the rates of stressed, syllabic or phrasal units. Importantly, visual speech cues can provide additional information about the amplitude envelope of the speech signal, given the close temporal correspondence between the opening and closing of the lips and the acoustic envelope, specifically in the syllable frequency range (Chandrasekaran et al., 2009). Thus, simultaneous exposure to visual and auditory input during speech perception may aid speech processing by enhancing neural tracking of speech, particularly at the syllable rate (Peelle & Sommers, 2015). The current study investigated whether visual speech cues facilitate infants’ speech processing, as indexed by their neural tracking of speech. 32-channel EEG data were recorded from 10-month-old Dutch-learning infants while they watched videos of a native Dutch speaker reciting passages in infant-directed speech. Half of the videos displayed the speaker’s full face (Audiovisual [AV] condition), while in the other half, the speaker’s mouth and jaw were masked with a static block, occluding the visual speech cues (AV-Block condition). We analysed infants’ neural speech tracking, measured by speech-brain coherence at the stress and syllable rates (1-1.75 Hz and 2.5-3.5 Hz, respectively, in our stimuli). To investigate whether infants show neural tracking of speech, cluster-based permutation analyses were performed at the stress and syllable rates, comparing real speech-brain coherence to shuffled data created by randomly pairing the speech envelope with the EEG data. Then, differences in infants’ speech-brain coherence between the AV and AV-Block conditions were tested with cluster-based permutation analyses at the frequencies of interest. Our results (N = 32) indicate that infants show neural tracking at both the stress and syllable rates at all electrode sites (cluster p’s = .002). However, we identified no significant differences in speech-brain coherence between the AV and AV-Block conditions, suggesting that infants tracked the speech envelope equally well whether visual speech cues were present or masked (p’s > .05). These results have important implications for our understanding of both speech processing and language development, as they suggest that neural speech tracking is a robust phenomenon already present in infancy and that infants’ speech processing is not necessarily impaired when visual speech cues are occluded, such as when listening to a speaker wearing a facemask.
Topic Areas: Language Development/Acquisition, Speech Perception
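
The speech-brain coherence measure and the shuffled baseline described in the abstract can be illustrated with a minimal sketch. This is not the authors' analysis code: the function names (speech_brain_coherence, shuffled_coherence), the simulated data, and parameters such as sfreq and n_perm are illustrative assumptions, and the cluster-based permutation step over electrodes and frequencies is omitted.

```python
import numpy as np

def speech_brain_coherence(eeg, env, sfreq):
    """Coherence across trials between one EEG channel and the speech envelope.

    eeg : (n_trials, n_samples) EEG segments for a single channel
    env : (n_trials, n_samples) speech amplitude envelopes, aligned per trial
    Returns (freqs, coherence), with coherence in [0, 1] per frequency bin.
    """
    X = np.fft.rfft(eeg, axis=1)            # per-trial EEG spectra
    Y = np.fft.rfft(env, axis=1)            # per-trial envelope spectra
    cross = np.sum(X * np.conj(Y), axis=0)  # cross-spectrum summed over trials
    coh = np.abs(cross) / np.sqrt(np.sum(np.abs(X) ** 2, axis=0)
                                  * np.sum(np.abs(Y) ** 2, axis=0))
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / sfreq)
    return freqs, coh

def shuffled_coherence(eeg, env, sfreq, n_perm=100, seed=0):
    """Surrogate coherence from randomly re-pairing envelopes with EEG trials."""
    rng = np.random.default_rng(seed)
    surrogates = []
    for _ in range(n_perm):
        order = rng.permutation(env.shape[0])
        surrogates.append(speech_brain_coherence(eeg, env[order], sfreq)[1])
    return np.mean(surrogates, axis=0)

# Illustrative use with simulated data: 40 trials of 5 s at 250 Hz, sharing a
# trial-specific 3 Hz (syllable-rate) rhythm between "speech" and "EEG".
sfreq, n_trials, n_samples = 250, 40, 1250
t = np.arange(n_samples) / sfreq
rng = np.random.default_rng(1)
phase = rng.uniform(0, 2 * np.pi, size=(n_trials, 1))
rhythm = np.sin(2 * np.pi * 3.0 * t + phase)     # shared syllable-rate rhythm
env = rhythm + 0.5 * rng.standard_normal((n_trials, n_samples))
eeg = 0.3 * rhythm + rng.standard_normal((n_trials, n_samples))

freqs, coh = speech_brain_coherence(eeg, env, sfreq)
baseline = shuffled_coherence(eeg, env, sfreq)
syllable_band = (freqs >= 2.5) & (freqs <= 3.5)  # syllable range from the abstract
print(f"real coherence 2.5-3.5 Hz:    {coh[syllable_band].mean():.3f}")
print(f"shuffled baseline 2.5-3.5 Hz: {baseline[syllable_band].mean():.3f}")
```

In this sketch, the shuffled baseline breaks the within-trial correspondence between envelope and EEG while preserving the spectral content of both signals, which is why it serves as the comparison distribution for the real coherence values; in the study itself, this comparison was made with cluster-based permutation statistics across electrodes rather than the simple band average printed here.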