Do faces affect nonnative-accented speech comprehension in children? An ERP investigation
Poster B14 in Poster Session B and Reception, Thursday, October 6, 6:30 - 8:30 pm EDT, Millennium Hall
Abigail Cosgrove1, Yushuang Liu1, Sarah Grey2, Janet van Hell1; 1The Pennsylvania State University, 2Fordham University
Spoken language provides listeners with information about the speaker’s identity, such as age, sex, or accentedness. Past research indicates that nonnative-accented speech can challenge processing, especially for listeners with limited experience with nonnative-accented speech (e.g., Grey & Van Hell, 2017). However, the linguistic information encoded in a speaker’s accent is not the only cue that listeners can use to identify a native or nonnative speaker. Visual cues such as facial information (e.g., cuing the speaker’s ethnicity) may mitigate these effects by providing a salient cue to the speaker’s identity (Fernandez & Van Hell, 2019; McGowan, 2015; Li, Yang, Scherf, & Li, 2013; Zheng & Samuel, 2017). Indeed, when adults were presented with facial information about the speaker in addition to the spoken sentences, the neural signatures associated with the processing of nonnative-accented speech changed. Specifically, adults showed an N400 response to semantic violations in both accent conditions, as in the study without face cues. However, with the added face cues, pronoun violations elicited a biphasic Nref-P600 neural response in non-accented speech and a P600 in nonnative-accented speech (Grey, Cosgrove, & Van Hell, 2020). Without face cues, grammar processing in nonnative-accented speech had shown no significant ERP effects, indicating that face cues aid comprehension. For children, accented speech processing may be more difficult than for adult listeners, because perception requires them to correctly map a speech stream produced with an unfamiliar accent onto their stored lexical representations at a time when they are still developing their mental lexicon. Indeed, behavioral research has found that children do not achieve adult-level performance in nonnative-accented speech comprehension (e.g., Bent et al., 2019). Critically, the neural basis of nonnative-accented speech processing in school-aged children is largely unexplored.
In the present study, we examined whether presenting faces as a cue to speaker identity aids nonnative-accented speech comprehension in children, focusing on online neural responses to violations. Using ERPs, we tested children (aged 9-11) with little exposure to nonnative-accented speech, who listened to sentences containing a semantic anomaly or a pronoun error (and their correctly produced counterparts), produced by Chinese-accented and American-accented speakers of English. Before listening to each speaker, children saw a face congruent with that speaker’s accent. As outlined above, non-linguistic cues such as the speaker’s facial information aid nonnative-accented speech comprehension (Grey et al., 2020; McGowan, 2015; Li, Yang, Scherf, & Li, 2013; Zheng & Samuel, 2017). In adults, pronoun violations in Chinese-accented English sentences (as well as in American-accented English sentences) elicited a P600-like neural response, indicating that face cues aided comprehension. Preliminary analyses of the child data, however, indicate that face presentation did not modulate pronoun processing in nonnative-accented speech: children still showed no neural response to pronoun violations in nonnative-accented speech (but were sensitive to pronoun violations in non-accented speech and to semantic violations in both accent conditions). This suggests that adults, but not children, use faces as a cue to speaker identity to aid nonnative-accented speech comprehension.
Topic Areas: Speech Perception, Development