Integrating Face and Acoustic Cues During Native- and Nonnative-accented Speech Processing: The Role of Face Cue Predictability
Poster C76 in Poster Session C, Wednesday, October 25, 10:15 am - 12:00 pm CEST, Espace Vieux-Port
This poster is part of the Sandbox Series.
Daisy Lei¹, Janet G. van Hell¹; ¹Pennsylvania State University
Individuals in our diverse linguistic landscape may produce native-accented speech in one discourse context and nonnative-accented speech in another, within the same language (e.g., heritage speakers in the United States may use native-accented English with friends from school but nonnative-accented English with family members). The present study consists of a series of ERP experiments that manipulate the predictability of a face cue with respect to the accent of the upcoming speech. Specifically, listeners will be introduced to speakers who have either a predictable accent (they produce only one accent) or an unpredictable accent (they can produce two accents). Our research questions are: 1) Do listeners use face cues to predict the accent of the upcoming speech, and how do they integrate face cues and speech accent during online native-accented and nonnative-accented speech processing? 2) How does the predictability of face cues regarding the upcoming speech accent affect the neural correlates of native-accented and nonnative-accented speech processing at 2a) the word level and 2b) the sentence level? Monolingual American English listeners will first be familiarized with each speaker's accent(s) (only American-accented English, only Chinese-accented English, or both Chinese-accented and American-accented English) via introduction videos. Participants will then complete either a go/no-go lexical decision task (Exp. 1B) or a sentence processing task (Exp. 2B) while EEG is recorded. A face cue (a photo of the speaker) will be presented concurrently with the audio; crucially, there will be a time delay between the onset of the face cue and the onset of the speech signal. In both experiments, following C. Martin et al. (2016), we will analyze the mean amplitude activity during this pre-speech period to examine face cue predictability effects. In Exp. 1B, the go/no-go lexical decision task will present single word/nonword audio stimuli, and listeners will be asked to press a button when they hear an animal word. Following C. Martin et al. (2016), ERP analyses will be conducted in the N1 and N400 time windows to examine lexicality effects. In Exp. 2B, the sentence processing task will present well-formed sentences and sentences containing semantic anomalies or pronoun mismatches. Following Grey et al. (2020), ERPs will be analyzed at the critical semantic or pronoun items manipulated in each sentence, in the N400 and P600 time windows, respectively. For both experiments, following the cue integration model (A. Martin, 2016), we predict that ERP effects will be modulated by face cue predictability (top-down information) and by the accent of the speech (bottom-up information). Two additional experiments will first be conducted to examine the neurocognitive mechanisms underlying the processing of spoken words (Exp. 1A) and sentences (Exp. 2A), using the same EEG tasks as Experiments 1B (go/no-go lexical decision) and 2B (sentence processing), respectively, but without a face cue, serving as speech-only control groups.
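For readers unfamiliar with mean-amplitude ERP analyses of the kind described above, the following is a minimal illustrative sketch in Python using MNE-Python; it is not part of the study materials, and the epochs object, condition labels, channel names, and time-window boundaries are hypothetical placeholders chosen only for illustration.

# Illustrative sketch only: mean ERP amplitude in an a-priori time window,
# as in a typical N400/P600-style analysis.
# Assumes an existing mne.Epochs object `epochs` with hypothetical condition
# labels "native" and "nonnative"; channels and windows are placeholders.
import mne  # MNE-Python, a widely used EEG/ERP analysis library

def mean_amplitude(evoked, tmin, tmax, picks):
    # Mean amplitude (in microvolts) over a time window and channel set.
    data = evoked.copy().pick(picks).crop(tmin=tmin, tmax=tmax).data
    return data.mean() * 1e6  # convert volts to microvolts

# Hypothetical usage: compare conditions in an assumed N400 window
# (300-500 ms) over centro-parietal channels.
# for cond in ("native", "nonnative"):
#     evoked = epochs[cond].average()
#     amp = mean_amplitude(evoked, 0.30, 0.50, ["Cz", "CPz", "Pz"])
#     print(cond, round(amp, 2), "microvolts")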
Topic Areas: Speech Perception, Multisensory or Sensorimotor Integration