Sentence predictability modulates the auditory N1 event-related potential component
Poster E53 in Poster Session E, Saturday, October 8, 3:15 - 5:00 pm EDT, Millennium Hall
This poster is part of the Sandbox Series.
McCall E Sarrett1, Joseph C Toscano1; 1Villanova University
As spoken language unfolds over time, listeners must analyze a rapidly changing auditory signal in which the predictability of upcoming segments varies. To accomplish this, multiple levels of linguistic analysis must be carried out simultaneously: Listeners accumulate predictions for upcoming segments based on higher-level information—such as sentence context—as they concurrently parse incoming acoustic information. The event-related potential (ERP) technique has been useful in elucidating some of the mechanisms supporting these processes. In particular, the auditory N1 is sensitive to differences in acoustic cues that signal phonetic differences, such as voice onset time (VOT). Short VOTs (voiced sounds; e.g., /b,d,g/) yield a more negative N1, whereas long VOTs (voiceless sounds; e.g., /p,t,k/) yield a less negative N1. Prior work has shown that ambiguous acoustic cues (e.g., between /b/ and /p/) are susceptible to feedback from higher-level linguistic influences: An ambiguous cue occurring in a /b/-biasing context will yield a more negative N1, whereas the same cue in a /p/-biasing context will yield a less negative N1 (consistent with how these sounds are encoded). The present study seeks to better characterize the nature of this process during auditory sentence processing. To do this, we manipulated how well sentences predicted a sentence-final target word along two dimensions—cloze probability (how strongly a word is predicted) and entropy (how many other possible words could reasonably be expected)—and measured N1 responses to an acoustically ambiguous target word. Participants identified which phoneme the target word started with in a six-alternative forced-choice task (/b,d,g,p,t,k/). EEG data were collected using a 32-channel BrainVision actiCHamp, with electrodes placed according to the International 10-20 system.
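The two predictability dimensions can be made concrete with a small sketch, assuming the standard cloze-norming definitions: cloze probability is the proportion of respondents producing the modal completion for a sentence frame, and entropy is the Shannon entropy of the full completion distribution. The norming responses below are invented for illustration and are not the study's materials.

```python
import math
from collections import Counter

def cloze_and_entropy(completions):
    """From a list of sentence-completion responses, return the cloze
    probability of the most common completion and the Shannon entropy
    (in bits) of the full completion distribution."""
    counts = Counter(completions)
    total = sum(counts.values())
    probs = [n / total for n in counts.values()]
    cloze = max(probs)                                # modal completion's share
    entropy = -sum(p * math.log2(p) for p in probs)   # spread of alternatives
    return cloze, entropy

# Invented norming responses for one sentence frame (illustration only):
# 8 of 10 respondents produce "beach", so cloze is high and entropy is low.
cloze, ent = cloze_and_entropy(["beach"] * 8 + ["party"] + ["pool"])
# → cloze = 0.8, entropy ≈ 0.92 bits
```

Note that the two dimensions can vary independently: a frame whose probability mass is spread over many plausible completions has low cloze and high entropy, which is what allows them to be manipulated as separate factors.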
Data were recorded continuously, referenced online to the left mastoid and re-referenced offline to the average of the two mastoids, and digitized at 500 Hz. Data were then band-pass filtered from 0.1 to 30 Hz and epoched to the onset of the sentence-final target word with a 200 ms baseline. If the influence of higher-level information on acoustic encoding is driven in part by the activation of specific lexical items, then cloze probability and entropy should modulate the strength and specificity of that activation, which in turn should affect the strength of the feedback that influences the N1. Thus, we hypothesize that sentences with higher cloze probability or lower entropy will produce a larger shift in mean N1 amplitude in the direction of the sentence's bias (i.e., whether a voiced or voiceless sound was expected). Preliminary results (N=22) show that sentence bias significantly shifts listeners’ categorization of an ambiguous target word; this effect is significantly stronger at higher cloze probabilities. Moreover, sentence bias and entropy modulate listeners’ reaction times for this phoneme decision. Finally, ERP analyses indicate a significant main effect of cloze probability on N1 amplitude, but no effects of sentence bias or entropy. However, further data collection may be needed to detect such effects and their interactions. This work will help disentangle the influences of cloze probability, entropy, and sentence bias on acoustic encoding, and will give insight into the neural mechanisms supporting these dynamic interactions during speech perception.
Topic Areas: Speech Perception, Perception: Auditory