
Do deaf readers pre-activate phonology during sentence comprehension?

Poster Session D, Saturday, October 26, 10:30 am - 12:00 pm, Great Hall 3 and 4

Zed Sehyr1, Manuel Perea2, Mairéad MacSweeney3, Marta Vergara-Martínez2, Eva Gutierrez-Sigut4; 1Chapman University, 2University of València, 3University College London, 4University of Essex

The ability of readers to use linguistic and contextual cues to predict upcoming words in a sentence is well-documented. This predictive processing allows for more efficient recognition of words, particularly when the context of a sentence is highly predictable (Federmeier & Kutas, 1999; Metusalem et al., 2012). Evidence suggests that phonological pre-activation can also facilitate word recognition (Ito et al., 2016). However, it remains unclear whether prelingually deaf readers, who rely primarily on visual rather than auditory input, engage in similar predictive processes. This study investigates to what extent deaf readers pre-activate semantic and phonological information during sentence comprehension. We hypothesize that, like hearing readers, deaf readers will show semantic pre-activation but may differ in phonological pre-activation due to their unique linguistic experiences. We recorded electroencephalograms (EEGs) from deaf and hearing participants as they read 224 high-cloze-probability sentences (e.g., “Pete broke his arm and had to wear a …”) presented one word at a time, centered on a computer screen. Participants answered comprehension questions following each sentence. The critical sentence-final words were manipulated across four conditions: 1) Congruent (cast), 2) Semantically incongruent (wall), 3) Pseudohomophone (kast), and 4) Orthographic control pseudoword (yast). Low-cloze-probability (<30%) sentences were included as fillers. Of all items, 30% contained a pseudoword. Only correctly answered sentences were analyzed. Preliminary data from 5 deaf and 8 hearing participants (data collection is ongoing; the full analysis will be presented at the conference) showed a typical N400 response for semantically incongruent words (wall) compared to congruent words (cast) in both groups, indicating semantic pre-activation.
Interestingly, differences emerged in the Late Positive Complex (LPC) responses: hearing readers exhibited a positive-going LPC for incongruent endings, while deaf readers showed a negative-going deflection for predicted endings, suggesting continued semantic processing after lexical access in deaf readers. For hearing readers, the orthographic control (yast) differed from the congruent condition (cast) early on, while the pseudohomophone condition (kast) only differed after 400 ms, indicating phonological pre-activation and a possible re-analysis of the pseudohomophone as a misspelling. For deaf readers, both the orthographic control (yast) and the pseudohomophone (kast) differed from the congruent condition (cast) only at the LPC stage. These preliminary findings suggest that while both deaf and hearing readers use semantic pre-activation, their phonological processing strategies diverge. Hearing readers appear to first use the phonological information available in the pseudohomophone and later engage in reanalysis of phonologically similar pseudowords. Deaf readers seem to engage in a different, later-occurring processing of both types of pseudowords. Overall, our study provides initial evidence that both deaf and hearing readers pre-activate semantic information during sentence comprehension. However, phonological pre-activation and subsequent reanalysis appear to differ between groups. Our findings will further our understanding of the mechanisms used by deaf readers during reading comprehension.

Topic Areas: Reading, Phonology
